computability · complexity · limits Interactive explainer · No libraries
All code runs in your browser
Theory of computation

What can be
computed at all?

Before asking how fast an algorithm runs, we must ask whether any algorithm exists. Computability theory answers the first question; complexity theory answers the second. Both began with a mathematician trying to automate proof — and ended by discovering deep limits on what machines can ever do.

"We must know. We will know."— David Hilbert, 1930, expressing the optimism that Gödel and Turing would soon sharply limit
Turing machines Halting problem Church-Turing thesis Decidability P vs NP NP-completeness Reductions
§01

Hilbert's programme and Gödel's shock

In 1900, David Hilbert proposed that all of mathematics could be grounded in a finite set of axioms from which every true statement could be formally proved — and every false one refuted. This was the Hilbert programme: complete, consistent, decidable. Mathematics as an algorithm.

In 1931, Kurt Gödel dealt Hilbert's hopes a devastating blow. His incompleteness theorems showed that any sufficiently powerful consistent formal system must contain true statements that cannot be proved within that system. The strongest form of Hilbert's programme — complete formal capture of arithmetic truth by one consistent system — could not succeed.

But Hilbert's third demand survived as a question: the Entscheidungsproblem (decision problem). Is there a mechanical procedure that, given any statement of first-order logic, decides whether it is provable (equivalently, valid)? Alonzo Church and Alan Turing independently answered: no — and in doing so, they had to define precisely what "mechanical procedure" means.

The remarkable thing is that proving a negative — that no algorithm exists for a problem — requires first defining what an algorithm is. Before 1936, no one had a precise definition. Turing and Church each provided one, and their definitions turned out to be equivalent. This is the Church-Turing thesis.
§02

The Turing machine

In 1936, Alan Turing defined a simple abstract machine to make precise the notion of "computation." It consists of an infinite tape of cells (each holding a symbol), a read/write head, a finite set of states, and a transition table. At each step: read the current cell, look up what to do in the transition table, write a symbol, move left or right, change state.

Formal definition
A Turing machine is a 7-tuple M = (Q, Σ, Γ, δ, q₀, q_accept, q_reject) where: Q is a finite set of states; Σ is the input alphabet (not containing the blank symbol ☐); Γ is the tape alphabet (Σ ⊆ Γ, ☐ ∈ Γ); δ: Q × Γ → Q × Γ × {L, R} is the transition function; q₀ ∈ Q is the start state; q_accept and q_reject are the halting states.

This minimalist device — arguably the simplest possible model of sequential computation — can compute anything any modern computer can compute. The tape is unbounded memory; the transition table is the program; state is the CPU register. Every algorithm ever written can be faithfully encoded as a Turing machine.

Interactive Turing machine — select a program and step through it
turing_machine.js — a Turing machine interpreter
// A complete Turing machine interpreter
// Transition function: δ(state, symbol) → (newState, writeSymbol, direction)

class TuringMachine {
  constructor(transitions, startState, acceptState, rejectState) {
    this.transitions  = transitions;  // Map of "state,symbol" → [newState, write, dir]
    this.startState   = startState;
    this.acceptState  = acceptState;
    this.rejectState  = rejectState;
  }

  run(input, maxSteps = 10000) {
    const tape = ["B", ...[...input], "B"]; // B = blank
    let head  = 1;   // start just after leading blank
    let state = this.startState;
    let steps = 0;

    while (steps++ < maxSteps) {
      if (state === this.acceptState) return { result: "accept", steps, tape };
      if (state === this.rejectState) return { result: "reject", steps, tape };

      while (head < 0) { tape.unshift("B"); head++; }
      while (head >= tape.length) tape.push("B");

      const sym = tape[head];
      const key = `${state},${sym}`;
      const trans = this.transitions[key];

      if (!trans) return { result: "reject", steps, tape, reason: `no transition for (${state}, ${sym})` };
      const [newState, write, dir] = trans;
      tape[head] = write;
      state      = newState;
      head      += dir === "R" ? 1 : -1;
    }
    return { result: "timeout", steps };
  }
}

// Example 1: palindrome checker for {a,b}*
// Checks whether input reads same forwards and backwards
const palindromeTransitions = {
  // Mark leftmost symbol, remember it, find rightmost unmarked, compare
  "q0,a":["q1","X","R"], "q0,b":["q5","X","R"],
  "q0,X":["qa","X","R"], "q0,B":["qa","B","R"],
  // q1: saw 'a' on left, scan right to the boundary (first X or blank)
  "q1,a":["q1","a","R"], "q1,b":["q1","b","R"], "q1,X":["q2","X","L"],
  "q1,B":["q2","B","L"],
  // q2: at rightmost unmarked symbol, compare against the remembered 'a'
  "q2,a":["q3","X","L"], "q2,b":["qr","b","L"], "q2,X":["qa","X","L"],
  // q3: matched 'a', scan back left
  "q3,a":["q3","a","L"], "q3,b":["q3","b","L"], "q3,X":["q0","X","R"],
  // q5-q7: saw 'b' on left — symmetric
  "q5,a":["q5","a","R"], "q5,b":["q5","b","R"], "q5,X":["q6","X","L"],
  "q5,B":["q6","B","L"],
  "q6,a":["qr","a","L"], "q6,b":["q7","X","L"], "q6,X":["qa","X","L"],
  "q7,a":["q7","a","L"], "q7,b":["q7","b","L"], "q7,X":["q0","X","R"],
};

const tm = new TuringMachine(palindromeTransitions, "q0", "qa", "qr");

const tests = [
  ["abba",    "accept"],
  ["abab",    "reject"],
  ["aabaa",   "accept"],
  ["bbab",    "reject"],
  ["a",       "accept"],
  ["",        "accept"],
];
console.log("Palindrome TM on {a,b}*:");
tests.forEach(([input, expected]) => {
  const { result, steps } = tm.run(input);
  const ok = result === expected ? "✓" : "✗";
  console.log(`  ${ok} "${input || "ε"}": ${result} (${steps} steps)`);
});
§03

The Church-Turing thesis

Alonzo Church independently defined computability using lambda calculus — a formal system of function abstraction and application, far removed from tape and head. In 1936, both Church and Turing proved the Entscheidungsproblem was unsolvable, using their respective models. Soon after, lambda-definability, recursive functions, and Turing-computability were shown to pick out the same class of computable functions.

Church-Turing Thesis
Every effectively calculable function can be computed by a Turing machine.
Equivalently: the informal notion of an effective procedure matches Turing-computability (and likewise lambda-definability and general recursiveness).

This is a thesis, not a theorem — it connects an informal idea ("effective procedure") to formal models. Stronger physical versions of the thesis concern what the laws of physics permit; the standard Church-Turing thesis is narrower. Across the standard symbolic models studied in logic and computer science — Turing machines, lambda calculus, recursive functions, RAM machines, cellular automata, even quantum computers — the class of computable functions has remained the same, even where efficiency differs dramatically.

Equivalent models — all computing the same class of functions
One consequence of the Church-Turing thesis: your programming language is Turing-complete if and only if it can simulate a Turing machine — which means every Turing-complete language can compute exactly the same set of functions. Python, C, Haskell, Java, Forth, and an infinite tape with a lookup table are all equivalent in computational power. Efficiency differs; power does not.
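Church's model can even be sketched directly in JavaScript. In the lambda calculus a number n is represented as "apply a function f, n times", and arithmetic becomes pure function composition — no numerals, no tape. A minimal sketch of Church numerals:

```javascript
// Church numerals: the number n is the function "apply f, n times".
// No arithmetic primitives are used to define the numerals themselves.
const ZERO = f => x => x;                    // apply f zero times
const SUCC = n => f => x => f(n(f)(x));      // one more application
const ADD  = m => n => f => x => m(f)(n(f)(x));
const MUL  = m => n => f => m(n(f));         // compose n-fold with m-fold

// Convert a Church numeral back to an ordinary number, for display only
const toNumber = n => n(x => x + 1)(0);

const ONE   = SUCC(ZERO);
const TWO   = SUCC(ONE);
const THREE = ADD(ONE)(TWO);

console.log(toNumber(THREE));           // 3
console.log(toNumber(MUL(TWO)(THREE))); // 6
```

That such a tape-free, state-free system computes exactly the Turing-computable functions is precisely the equivalence Church and Turing established.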
§04

The Halting Problem

With a precise definition of computation in hand, Turing asked: is there a Turing machine H that, given any Turing machine M and input w, decides whether M halts on w? This is the Halting Problem — and the answer is no. The proof is a masterpiece of diagonalisation.

Theorem (Turing 1936)
HALT = { ⟨M, w⟩ | M is a TM that halts on input w } is undecidable. No Turing machine can decide HALT.
The diagonalisation proof — step through the argument
halting.js — the diagonalisation argument in code
// The Halting Problem proof — encoded as closely as JavaScript allows
// We can show that any putative "halt decider" leads to a contradiction

// Some programs clearly halt or loop forever
const programs = {
  // Returns immediately
  trivialHalt:   () => 42,
  // Loops forever (we model with a flag)
  infiniteLoop:  () => { while(true){}; },
  // Halts for some inputs, loops on others
  conditionalHalt: (x) => { if (x > 0) return x; while(true){}; },
};

// The ORACLE: suppose we had a perfect halt decider
// halts(program, input) → true if program halts on input, false if it loops
function oracleHalts(program, input) {
  // This function CANNOT BE IMPLEMENTED. We're assuming it hypothetically.
  // A real implementation would need to solve the halting problem.
  throw new Error("The oracle cannot exist — proof follows");
}

// The DIAGONAL MACHINE D — this is where the contradiction lives
// D(program): if oracleHalts(program, program) then loop; else halt
function D(program) {
  if (oracleHalts(program, program)) {
    while(true) {} // loop forever
  } else {
    return; // halt
  }
}

// Now ask: what does D do when given itself as input?
// Case 1: oracleHalts(D, D) = true  → D loops on D → oracle was WRONG (said halt)
// Case 2: oracleHalts(D, D) = false → D halts on D → oracle was WRONG (said loop)
// Both cases contradict the oracle being correct. Therefore the oracle cannot exist.

function demonstrateContradiction() {
  console.log("=== Halting Problem — Diagonalisation ===");
  console.log("Suppose oracle H(M, w) correctly decides whether M halts on w.");
  console.log("Construct D(M): if H(M, M) = 'halts', then loop; else halt.");
  console.log("");

  const cases = ["halts", "loops"];
  for (const oracleSays of cases) {
    console.log(`Case: H(D, D) says D ${oracleSays} on D`);
    if (oracleSays === "halts") {
      console.log(`  Then D, by construction, loops on D.`);
      console.log(`  But oracle said halts. ⟹ CONTRADICTION.`);
    } else {
      console.log(`  Then D, by construction, halts on D.`);
      console.log(`  But oracle said loops. ⟹ CONTRADICTION.`);
    }
    console.log("");
  }
  console.log("Both cases are contradictions. Therefore H cannot exist. □");
}
demonstrateContradiction();

// Concrete examples of programs whose halting status is unknown or hard:
console.log("--- Programs with hard halting questions ---");
console.log("Collatz(n): if n==1 return; if even, n=n/2; if odd, n=3n+1; repeat");
console.log("  Does Collatz halt on all positive integers? UNKNOWN (open problem)");
console.log("Goldbach verifier: halt when Goldbach conjecture counterexample found");
console.log("  Halts iff Goldbach is false. Halting status = truth of Goldbach. UNKNOWN");
§05

Decidability and recognisability

The Halting Problem opened a Pandora's box. We now have three tiers of languages (sets of strings), based on what Turing machines can do with them:

Hierarchy of languages
A language L is decidable (recursive) if some TM halts on every input, accepting exactly the strings in L and rejecting all others.
A language L is recognisable (recursively enumerable) if some TM accepts exactly the strings in L (but may loop forever on strings not in L).
A language is co-recognisable if its complement is recognisable.
A language is undecidable if no TM decides it. Some undecidable languages are still recognisable; others are not.
The computability lattice — where problems live

The recognisable languages are also called Σ₁ (there exists a computation that accepts). The co-recognisable are Π₁. The decidable languages are Δ₁ = Σ₁ ∩ Π₁. This is the bottom of the arithmetical hierarchy — an infinite tower of increasingly undecidable problems.

decidability.js — decidable vs recognisable vs unrecognisable
// The three tiers of computability, illustrated with concrete examples

// === DECIDABLE: TM always halts with accept or reject ===
function decidePrimes(n) {
  // Decides whether n is prime — TM always terminates
  if (n < 2) return false;
  for (let i = 2; i <= Math.sqrt(n); i++)
    if (n % i === 0) return false;
  return true;
}

// === RECOGNISABLE BUT NOT DECIDABLE: HALT ===
// A recogniser for HALT: just run the machine and accept if it halts
// But if M loops, our recogniser loops too — never rejects
function haltRecogniser(M_code, w) {
  // This RECOGNISES HALT by simulating M on w
  // If M halts → we accept; if M loops → we loop (never reject)
  // This is why HALT is recognisable but not decidable
  return "[simulate M on w; accept iff M accepts; loop otherwise]";
}

// === NOT RECOGNISABLE: complement of HALT ===
// HALT_complement = {⟨M,w⟩ | M does NOT halt on w}
// To recognise it, we'd need to wait forever to confirm non-halting — impossible

// === Rice's Theorem: a remarkable generalisation ===
// "Every non-trivial semantic property of Turing machines is undecidable"
const riceExamples = [
  { property: "Does M accept the empty string?",           decidable: false },
  { property: "Does M halt on all inputs?",                 decidable: false },
  { property: "Does M compute the identity function?",       decidable: false },
  { property: "Does M accept a finite language?",            decidable: false },
  { property: "Does M accept a regular language?",           decidable: false },
  { property: "Is M's language empty?",                      decidable: false },
  { property: "Does M have exactly 7 states?",               decidable: true  }, // syntactic, not semantic
];

console.log("=== Rice's Theorem examples ===");
console.log("Every NON-TRIVIAL SEMANTIC property of Turing machines is undecidable.");
console.log("(Semantic = depends on what the TM computes, not on its description)\n");
riceExamples.forEach(ex => {
  const tag = ex.decidable ? "DECIDABLE (syntactic)" : "UNDECIDABLE (Rice's Thm)";
  console.log(`${ex.property}\n  → ${tag}`);
});

console.log("\n=== Implication for static analysis ===");
console.log("No total analyser can always give exact yes/no answers to");
console.log("non-trivial semantic questions about arbitrary programs.");
console.log("To stay useful, real tools restrict the language, approximate, or say 'unknown'.");

Rice's theorem has a sobering practical corollary: no total, general-purpose analyser for arbitrary programs can always return exact yes/no answers to non-trivial semantic questions. Real tools cope by restricting the language, proving only special properties, approximating conservatively, or returning unknown. This is not a failure of engineering; it reflects a mathematical limit.
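That coping strategy can be sketched in a few lines: a three-valued analyser that answers only when it is sure and says "unknown" otherwise. The toy checker below (the function and its regex heuristics are purely illustrative, not a real tool) is sound but deliberately incomplete — exactly the trade-off Rice's theorem forces:

```javascript
// A deliberately incomplete "does it halt?" analyser for JS-like source text.
// It is sound (never gives a wrong yes/no) but allowed to punt with "unknown".
// Rice's theorem rules out a total, exact version of this function.
function mightHalt(source) {
  // No loop constructs at all → straight-line code must halt
  if (!/while\s*\(|for\s*\(|goto/.test(source)) return "yes";
  // A bare `while (true)` with no break/return after it → cannot halt
  if (/while\s*\(\s*true\s*\)/.test(source) && !/break|return/.test(source.split("while")[1]))
    return "no";
  // Everything else: conservatively refuse to answer
  return "unknown";
}

console.log(mightHalt("return 42;"));              // "yes"
console.log(mightHalt("while (true) {}"));         // "no"
console.log(mightHalt("while (n > 1) n = f(n);")); // "unknown"
```

Real analysers are vastly more sophisticated, but the shape is the same: enlarge the "yes"/"no" regions as far as resources allow, and keep an honest "unknown".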

§06

Reductions — the master technique

To prove a problem undecidable, we don't re-run Turing's diagonalisation argument each time. We use reduction: if we can transform any instance of a known-undecidable problem into an instance of the new problem (in a computable way), then the new problem is at least as hard.

Many-one reduction (mapping reduction)
A ≤ₘ B (A reduces to B) if there is a computable function f such that for all w: w ∈ A ⟺ f(w) ∈ B.
If A ≤ₘ B and B is decidable, then A is decidable.
Equivalently: if A ≤ₘ B and A is undecidable, then B is undecidable.
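The decidability-transfer direction is literally function composition: a decider for A is "compute f, then ask B's decider". A toy sketch — the languages and reduction here are illustrative decidable stand-ins, not HALT:

```javascript
// If A ≤m B via computable f, a decider for B yields a decider for A:
//   decideA = w => decideB(f(w))
// Toy instance: B = even numbers, A = numbers whose successor is even
// (i.e. the odd numbers), f(n) = n + 1, so  n ∈ A ⟺ f(n) ∈ B.
const decideB = n => n % 2 === 0;   // decider for B
const f       = n => n + 1;         // the computable reduction function
const decideA = n => decideB(f(n)); // decider for A, obtained for free

console.log(decideA(3)); // true  (3 ∈ A since 4 is even)
console.log(decideA(8)); // false
```

Undecidability proofs run this composition in reverse: since no decider for HALT can exist, no decider for any B with HALT ≤ₘ B can exist either.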
How reductions chain — follow the arrows
reductions.js — proving undecidability via reduction from HALT
// Reduction: if we can solve B, we can solve A
// Contrapositive: if A is undecidable, and A ≤m B, then B is undecidable
// Here the clean reduction is HALT ≤m co-EMPTY; since decidability is closed under complement, EMPTY is undecidable too

// HALT = { ⟨M,w⟩ | M halts on w }  — known undecidable
// EMPTY = { ⟨M⟩ | L(M) = ∅ }       — also undecidable

// The reduction function f: given ⟨M, w⟩, produce a new machine M'
// M' ignores its own input and simulates M on w
// If M halts on w → M' accepts everything → L(M') = Σ* ≠ ∅  → ⟨M'⟩ ∉ EMPTY
// If M loops on w → M' loops on everything → L(M') = ∅ → ⟨M'⟩ ∈ EMPTY
// Therefore: ⟨M,w⟩ ∈ HALT ⟺ f(⟨M,w⟩) ∉ EMPTY  (with negation → HALT ≤m co-EMPTY)

function reductionToEmpty(M_description, w) {
  // Produces a new TM description M' such that:
  // L(M') = ∅   iff M loops on w
  // L(M') = Σ*  iff M halts on w
  return {
    description: `M' that:
  1. Ignores its input string
  2. Simulates ${M_description} on "${w}"
  3. If simulation halts: accept the original input
  4. If simulation loops: loop forever (L(M') = ∅)`,
    property: "L(M') = ∅ iff M loops on w"
  };
}

console.log("=== Proof that EMPTY is undecidable ===");
console.log("We reduce HALT to the complement of EMPTY (co-EMPTY).");
console.log("Suppose for contradiction that EMPTY were decidable.\n");

const instance = reductionToEmpty("M_collatz", "any_input");
console.log("Given ⟨M, w⟩ (a HALT instance), construct M':");
console.log(instance.description);
console.log("\nNow run the EMPTY decider on M'.");
console.log("If EMPTY(M') = false → L(M') ≠ ∅ → M halts on w → HALT accepts ⟨M,w⟩");
console.log("If EMPTY(M') = true  → L(M') = ∅ → M loops on w → HALT rejects ⟨M,w⟩");
console.log("We have decided HALT. Contradiction. Therefore EMPTY is undecidable. □\n");

// A chain of reductions from HALT (each "via" is a one-line sketch of the construction)
const reductions = [
  { from: "HALT",      to: "EMPTY_LANGUAGE",  via: "modify M to simulate on w" },
  { from: "HALT",      to: "TOTAL_HALT",       via: "modify M to ignore input, run w" },
  { from: "HALT",      to: "EQ_TM",            via: "compare L(M) to fixed language" },
  { from: "EMPTY",     to: "FINITE_LANGUAGE",  via: "enumerate and intersect" },
  { from: "FINITE",    to: "REGULAR_LANGUAGE", via: "regular if finite or cofinite" },
];
console.log("Reduction chain from HALT:");
reductions.forEach(r => console.log(`  ${r.from} ≤m ${r.to}  (via: ${r.via})`));
§07

P and NP — from can we to how fast

Decidability asks whether a problem can be solved at all. Complexity theory asks: given that it can, how many resources (time, space) does it require? The most important complexity question in computer science is the relationship between P and NP.

P and NP
P = the class of decision problems solvable by a deterministic TM in polynomial time in the input size n.
NP = the class of decision problems where "yes" instances have polynomial-length certificates verifiable in polynomial time.
Equivalently: NP = problems solvable by a non-deterministic TM in polynomial time.
P ⊆ NP — every problem in P has a trivially-checkable certificate (just re-run the polynomial algorithm).

The intuition: NP captures problems that are easy to verify but apparently hard to solve. Given a completed Sudoku grid, checking it is instant. Finding the solution from scratch can be hard. More precisely: generalised Sudoku (where the board size grows) is in NP and is NP-complete; the fixed 9×9 newspaper puzzle is only a constant-size special case.
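The Sudoku claim is easy to make concrete: checking a filled grid is a few nested loops, polynomial in the board size. A verifier for the 4×4 variant (2×2 boxes, chosen for brevity):

```javascript
// Certificate check for 4x4 Sudoku (2x2 boxes): every row, column, and box
// must contain exactly {1,2,3,4}. Polynomial in the side length — this is
// the "easy to verify" half of the NP story.
function verifySudoku4(grid) {
  const ok = cells => new Set(cells).size === 4 && cells.every(v => v >= 1 && v <= 4);
  for (let i = 0; i < 4; i++) {
    if (!ok(grid[i])) return false;                 // row i
    if (!ok(grid.map(row => row[i]))) return false; // column i
  }
  for (let r = 0; r < 4; r += 2)
    for (let c = 0; c < 4; c += 2) {
      const box = [grid[r][c], grid[r][c+1], grid[r+1][c], grid[r+1][c+1]];
      if (!ok(box)) return false;                   // 2x2 box at (r, c)
    }
  return true;
}

const solved = [
  [1, 2, 3, 4],
  [3, 4, 1, 2],
  [2, 1, 4, 3],
  [4, 3, 2, 1],
];
console.log(verifySudoku4(solved)); // true
```

Finding a completion from a partially filled n²×n² grid is the part with no known polynomial algorithm.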

P vs NP — solve vs verify, with adjustable size
complexity_classes.js — P, NP, and the certificate model
// P and NP illustrated with concrete problems
// P: solvable in polynomial time
// NP: certificate verifiable in polynomial time

// === IN P: Graph reachability (BFS/DFS, O(V+E)) ===
function isReachable(graph, source, target) {
  const visited = new Set([source]);
  const queue   = [source];
  while (queue.length) {
    const node = queue.shift();
    if (node === target) return true;
    (graph[node] || []).forEach(n => { if (!visited.has(n)) { visited.add(n); queue.push(n); }});
  }
  return false;
}

// === IN NP: Boolean satisfiability (SAT) ===
// Solve: find assignment to variables making a formula true — potentially 2^n tries
// Verify: given an assignment, check in O(clauses) time
function verifySATCertificate(clauses, assignment) {
  // Certificate = variable assignment; verification is O(clauses) — polynomial
  return clauses.every(clause =>
    clause.some(literal => {
      const [variable, negated] = literal;
      return negated ? !assignment[variable] : assignment[variable];
    })
  );
}

// Brute-force SAT solver: tries all 2^n assignments
function solveSAT(variables, clauses) {
  const n = variables.length;
  for (let mask = 0; mask < (1 << n); mask++) {
    const assignment = {};
    variables.forEach((v, i) => { assignment[v] = !!(mask & (1 << i)); });
    if (verifySATCertificate(clauses, assignment)) return assignment;
  }
  return null; // unsatisfiable
}

// Example: (x₁ ∨ ¬x₂) ∧ (¬x₁ ∨ x₃) ∧ (x₂ ∨ x₃)
const vars    = ["x1", "x2", "x3"];
const clauses = [
  [["x1",false],["x2",true]],   // x1 ∨ ¬x2
  [["x1",true],["x3",false]],    // ¬x1 ∨ x3
  [["x2",false],["x3",false]],  // x2 ∨ x3
];

console.log("=== SAT: (x1 ∨ ¬x2) ∧ (¬x1 ∨ x3) ∧ (x2 ∨ x3) ===");
const solution = solveSAT(vars, clauses);
if (solution) {
  console.log(`SATISFIABLE: ${JSON.stringify(solution)}`);
  console.log(`Verify certificate: ${verifySATCertificate(clauses, solution)}`);
} else {
  console.log("UNSATISFIABLE");
}

// Complexity comparison
console.log("\n=== Time to solve vs verify ===");
[10, 20, 30, 50].forEach(n => {
  const verifyTime  = n;                // O(n) certificate check
  const bruteForce  = Math.pow(2, n);   // 2^n exhaustive search
  const polyAlgo    = n * Math.log2(n);  // n log n (hypothetical P algorithm)
  console.log(`n=${n}: verify=O(${verifyTime}), brute-force=2^${n}=${bruteForce.toLocaleString()}, poly-if-P≈${polyAlgo.toFixed(0)}`);
});
§08

NP-completeness

Within NP, some problems are hardest — as hard as anything in NP. A problem X is NP-hard if every problem in NP can be reduced to X in polynomial time. A problem is NP-complete if it is both NP-hard and in NP itself.

The first NP-complete problem was identified by Stephen Cook (1971) and independently by Leonid Levin: Boolean satisfiability (SAT). Cook proved that every NP problem reduces to SAT in polynomial time. Richard Karp then showed (1972) that 21 natural combinatorial problems are NP-complete via chains of reductions — including Hamiltonian cycle, vertex cover, chromatic number, and knapsack; the hardness of the traveling salesman problem follows immediately.

Cook-Levin Theorem (1971)
SAT is NP-complete. Every problem in NP polynomial-time reduces to SAT.
Consequence: if SAT can be solved in polynomial time, then P = NP.
NP-completeness via Karp's 21 problems — click to trace the reduction
np_complete.js — verifiers, reductions, and classic NPC problems
// NP-complete problems: hard to solve, easy to verify
// All polynomial-time reducible to each other

// === 3-COLORING: can we colour a graph with 3 colours so no neighbours share? ===
function verify3Coloring(graph, coloring) {
  // Verify certificate: O(edges) — polynomial
  for (const [u, v] of graph.edges)
    if (coloring[u] === coloring[v]) return false;
  return true;
}

// === VERTEX COVER: does there exist a set S of k vertices covering all edges? ===
function verifyVertexCover(graph, cover, k) {
  if (cover.length > k) return false;
  const coverSet = new Set(cover);
  return graph.edges.every(([u,v]) => coverSet.has(u) || coverSet.has(v));
}

// === KNAPSACK: can we fill a knapsack with items summing to exactly W? ===
function verifyKnapsack(items, chosen, targetWeight) {
  const total = chosen.reduce((s, i) => s + items[i].weight, 0);
  return total === targetWeight;
}

// === Demonstrate: certificate verification is easy even when finding is hard ===
const graph = {
  vertices: [0,1,2,3,4],
  edges:    [[0,1],[1,2],[2,3],[3,4],[4,0],[0,2]],
};

console.log("=== NP certificate verification ===");
// Someone claims this 3-coloring works:
const colorClaim = { 0:"R", 1:"G", 2:"B", 3:"R", 4:"G" };
console.log(`3-Coloring ${JSON.stringify(colorClaim)}: ${verify3Coloring(graph, colorClaim) ? "VALID ✓" : "INVALID ✗"}`);
const badColor = { 0:"R", 1:"R", 2:"B", 3:"R", 4:"G" };
console.log(`3-Coloring ${JSON.stringify(badColor)}: ${verify3Coloring(graph, badColor) ? "VALID ✓" : "INVALID ✗ (0-1 same colour)"}`);

// Vertex cover
console.log(`\nVertex cover [0,2,4] (k=3): ` + verifyVertexCover(graph, [0,2,4], 3));
console.log("Vertex cover [0,1] (k=2):   " + verifyVertexCover(graph, [0,1], 2));

// The Cook reduction sketch: SAT ← 3-SAT ← 3-COLORING
console.log("\n=== NP-completeness: a partial reduction chain ===");
const reductions = [
  "SAT (Cook-Levin: every NP problem reduces here)",
  "  ← 3-SAT (each clause has ≤3 literals, same hardness)",
  "      ← INDEPENDENT SET (represent clauses as graph)",
  "          ← VERTEX COVER (complement of IS)",
  "              ← CLIQUE (complement of VC)",
  "  ← 3-COLORING (reduce 3-SAT via gadgets)",
  "  ← HAMILTONIAN CYCLE (via 3-SAT gadgets)",
  "      ← TRAVELING SALESMAN (TSP: iff Ham. cycle in metric graph)",
];
reductions.forEach(r => console.log(r));
§09

The P vs NP question

The question P = NP? is the most important open problem in computer science and one of the seven Millennium Prize Problems ($1,000,000 prize). Informally: if a solution can be quickly verified, can it always be quickly found?

Most experts believe P ≠ NP — that there is a genuine separation, that problems like SAT require superpolynomial time. But no proof exists. The barrier is that we lack good techniques for proving lower bounds on computational complexity. We know many upper bounds (polynomial algorithms); proving lower bounds has resisted all attacks for over 50 years.

What a proof either way would mean
If P = NP
• All NP-complete problems (SAT, TSP, protein folding, scheduling) become polynomial-time solvable
• Most current public-key cryptography breaks — RSA, ECC assume hardness of factoring/discrete log
• Drug discovery, materials science: exponential speedup in optimisation
• Many search-and-verification tasks would become dramatically easier
• "The consequences would be profound and mostly positive, except for cryptography" — Sipser
If P ≠ NP (most likely)
• Hardness assumptions become more plausible, but P ≠ NP alone would not prove modern cryptography secure
• NP-complete problems remain outside P, though not necessarily exponentially hard
• Focus shifts to approximation algorithms and special-case tractability
• Many optimisation problems still require heuristics, approximation, or superpolynomial worst-case time
• The complexity landscape we see now is "real" — not an artefact of our ignorance
The reason proving P ≠ NP is hard is itself illuminating. A series of barrier results — relativisation (Baker, Gill, Solovay), natural proofs (Razborov, Rudich), and algebrisation (Aaronson, Wigderson) — shows that virtually every technique used to prove lower bounds in other settings is provably insufficient to resolve P vs NP. The problem may require entirely new mathematics.
p_vs_np.js — the practical impact of complexity classes
// Concrete numbers: why the P vs NP gap matters
// Polynomial vs exponential time on realistic inputs

function padRight(s, width) {
  s = String(s);
  while (s.length < width) s += " ";
  return s;
}

function formatTime(ops) {
  var opsPerSec = 1e9;
  if (!isFinite(ops)) return "beyond astronomical";
  var seconds = ops / opsPerSec;
  if (seconds < 1) return ((seconds * 1000).toFixed(2) + " ms");
  if (seconds < 60) return (seconds.toFixed(1) + " s");
  if (seconds < 3600) return ((seconds / 60).toFixed(1) + " min");
  if (seconds < 86400) return ((seconds / 3600).toFixed(1) + " hrs");
  if (seconds < 3.15e7) return ((seconds / 86400).toFixed(0) + " days");
  if (seconds < 3.15e10) return ((seconds / 3.15e7).toFixed(0) + " years");
  if (seconds < 3.15e16) return ((seconds / 3.15e10).toFixed(0) + "K years");
  return ops.toExponential(1) + " ops";
}

function factorialBounded(n) {
  var r = 1;
  for (var i = 2; i <= Math.min(n, 170); i++) r *= i;
  return r;
}

var algorithms = [
  { name: "Constant    O(1)",      ops: function(n) { return 1; } },
  { name: "Logarithmic O(log n)",  ops: function(n) { return Math.log(n) / Math.log(2); } },
  { name: "Linear      O(n)",      ops: function(n) { return n; } },
  { name: "Linearithm  O(n log n)", ops: function(n) { return n * (Math.log(n) / Math.log(2)); } },
  { name: "Quadratic   O(n^2)",    ops: function(n) { return n * n; } },
  { name: "Cubic       O(n^3)",    ops: function(n) { return n * n * n; } },
  { name: "Exponential O(2^n)",    ops: function(n) { return Math.pow(2, n); } },
  { name: "Factorial   O(n!)",     ops: function(n) { return factorialBounded(n); } }
];

var sizes = [10, 100, 1000];
console.log("Algorithm complexity — time at 10^9 operations/second:");
console.log(padRight("", 24) + padRight("n=10", 12) + padRight("n=100", 16) + padRight("n=1000", 20));
for (var a = 0; a < algorithms.length; a++) {
  var alg = algorithms[a];
  var row = padRight(alg.name, 24);
  for (var j = 0; j < sizes.length; j++) {
    var n = sizes[j];
    var width = (n === 10) ? 12 : (n === 100 ? 16 : 20);
    row += padRight(formatTime(alg.ops(n)), width);
  }
  console.log(row);
}

console.log("\n--- P algorithms: TRACTABLE (polynomial, manageable even at n=1000)");
console.log("--- EXP algorithms: INTRACTABLE (exponential, impossible beyond n≈60)");
console.log("\nKey: If P=NP, SAT and 3-COLORING would behave like the top rows above.");
§10

Beyond NP: the complexity landscape

P and NP are just the beginning. The full complexity zoo contains hundreds of classes. Here are the most important ones beyond NP:

P
Polynomial time. Practically efficient.
Sorting, shortest paths, linear programming, primality (AKS)
NP
Certificates verifiable in polynomial time. Contains P; problems in NP ∩ co-NP (such as factoring) are believed not to be NP-complete.
SAT, Hamilton cycle, graph colouring, integer programming
NP-complete
Hardest in NP. Poly-reducible to each other. If any is in P, then P=NP.
3-SAT, TSP, Vertex Cover, Clique, Knapsack, 3-Coloring
PSPACE
Decidable in polynomial space (possibly exponential time). P ⊆ NP ⊆ PSPACE.
TQBF (complete for PSPACE), generalised geography, many two-player games of polynomially bounded length
EXP
Exponential time. PSPACE ⊆ EXP. Known P ≠ EXP (time hierarchy theorem).
Generalised n×n chess, checkers, and Go — EXPTIME-complete game variants
Undecidable
No algorithm exists. Not in any complexity class. Above the computable.
Halting problem, Post correspondence, Hilbert's 10th, tiling the plane
The Time Hierarchy Theorem proves that giving a TM more time genuinely lets it solve more problems; in particular, P ≠ EXP is a theorem. Similarly, the Space Hierarchy Theorem gives L ≠ PSPACE. The irony: since P ⊆ NP ⊆ EXP and P ≠ EXP, at least one of the inclusions P ⊆ NP and NP ⊆ EXP must be strict — yet we cannot prove which. This asymmetry is part of what makes P vs NP so mysteriously resistant.
Problem | Class | Best known | Notes
Sorting n numbers | P | O(n log n) | Provably optimal for comparison sorts
Primality testing | P | Polynomial time (AKS; improved since) | Open for centuries; Fermat and Miller–Rabin tests practical
Integer factoring | NP ∩ co-NP | Subexponential (number field sieve) | Likely not NP-complete; basis of RSA security
Graph isomorphism | NP | Quasipolynomial (Babai 2015) | Believed not NP-complete; status relative to P remains open
Boolean SAT (3-SAT) | NP-complete | 2^O(n) brute force; heuristics in practice | First NP-complete problem (Cook 1971)
Traveling Salesman | NP-complete (decision) | Held–Karp O(n²2ⁿ) exact; Christofides 1.5-approx for metric TSP | Euclidean TSP has a PTAS; metric TSP has constant-factor approximations
Quantified Boolean Formula | PSPACE-complete | EXP brute force | Generalised two-player game; complete for PSPACE
Halting Problem | Undecidable | No general decider | Heuristics can exist but cannot solve the problem in full generality