Part VI — Chapter 20

Gödel & Turing — Limits of Knowledge

The shattering discoveries that revealed the inherent limitations of formal systems and computation

20.1 The Foundational Crisis

At the turn of the twentieth century, mathematics faced an existential crisis. The discovery of paradoxes in set theory — most dramatically Russell’s Paradox (1901), which showed that the set of all sets that do not contain themselves leads to a contradiction — threatened to undermine the logical foundations upon which all of mathematics rested.

Russell’s Paradox (1901)

Define $R = \{x \mid x \notin x\}$ — the set of all sets that do not contain themselves. Does $R$ contain itself?

  • If $R \in R$, then by definition $R \notin R$. Contradiction.
  • If $R \notin R$, then by definition $R \in R$. Contradiction.

This destroyed Frege’s logical system, which had no restrictions on set formation. Frege received Russell’s letter just as the second volume of his Grundgesetze der Arithmetik was going to press, and wrote in the appendix: “A scientist can hardly encounter anything more undesirable than to have the foundation collapse just as the work is finished.”

Three competing programs emerged to resolve the crisis, each proposing a different vision of what mathematics is and how it should be founded:

The Three Foundational Programs

Logicism (Frege, Russell, Whitehead): Mathematics is reducible to logic. All mathematical truths are logical truths, and all mathematical objects are logical constructions. The program was pursued in Principia Mathematica but was ultimately undermined by Gödel’s theorems and the need for non-logical axioms (like the axiom of infinity).

Formalism (Hilbert): Mathematics is a formal game of symbol manipulation. The meaningfulness of mathematical statements is irrelevant; what matters is that the rules are consistent. Hilbert sought a finitary consistency proof for all of mathematics. Gödel showed this is impossible.

Intuitionism (Brouwer): Mathematics is a mental construction. A mathematical object exists only when it has been constructively built in the mind. The law of the excluded middle ($P \lor \neg P$) is rejected for infinite domains. Brouwer accepted only constructive proofs, which severely limited the mathematics that could be done.

The debate was fierce and sometimes personal. Hilbert declared, “Taking the principle of the excluded middle from the mathematician would be the same as prohibiting the astronomer from using a telescope or the boxer from using his fists.” Brouwer, in turn, maintained that Hilbert’s formalism was a betrayal of mathematical truth. The conflict between Hilbert and Brouwer — the Grundlagenstreit (foundational dispute) — grew so bitter that Hilbert used his influence to remove Brouwer from the editorial board of Mathematische Annalen in 1928.

20.2 Russell and Whitehead — Principia Mathematica

Bertrand Russell (1872–1970) and Alfred North Whitehead (1861–1947) embarked on the most ambitious project in the history of mathematical logic: to derive all of mathematics from a small set of logical axioms and inference rules. The result was Principia Mathematica, published in three massive volumes (1910, 1912, 1913), totaling nearly 2,000 pages.

To avoid Russell’s Paradox, they introduced type theory, a hierarchy of logical types: individuals are type 0, sets of individuals are type 1, sets of sets of individuals are type 2, and so on. A set of type $n$ can only contain elements of type $n-1$, preventing self-referential definitions.

The most famous moment in the work comes on page 379 of Volume I, where, after hundreds of pages of preparatory definitions and theorems, Russell and Whitehead reach proposition $\ast 54.43$, remarking that from it, “when arithmetical addition has been defined,” it will follow that $1 + 1 = 2$. The proof is finally completed in Volume II:

$$\ast 110.643 \;\;\vdash\;\; 1 + 1 = 2$$

followed by the dry remark: “The above proposition is occasionally useful.” This encapsulates both the extraordinary thoroughness and the ponderous nature of the logicist program.

Despite its monumental ambition, Principia ultimately fell short of its goals. The system required axioms — like the axiom of infinity (there exist infinitely many objects) and the axiom of reducibility — that are not purely logical. Moreover, Gödel would show in 1931 that no consistent formal system of this kind can be complete: there will always be true mathematical statements that the system cannot prove.

Nonetheless, Principia Mathematica remains a landmark achievement. It demonstrated that large swaths of mathematics can be formalized, it introduced ideas (type theory, and notation that later influenced Church’s lambda calculus) that would prove essential for computer science, and it set the stage for Gödel’s work by providing a precise target for his analysis.

20.3 Kurt Gödel (1906–1978)

Kurt Friedrich Gödel was born on 28 April 1906 in Brünn (now Brno), in the Austro-Hungarian Empire. As a child he was known as “Herr Warum” (“Mr. Why”) for his incessant questioning. He entered the University of Vienna in 1924, originally intending to study physics, but was drawn to mathematics and logic through the lectures of Hans Hahn and the stimulating atmosphere of the Vienna Circle, a group of philosophers, scientists, and mathematicians dedicated to logical positivism.

At age 23, Gödel completed his doctoral thesis (1929), which proved the completeness theorem for first-order predicate logic: every logically valid formula in first-order logic is provable. This was a positive result, confirming one aspect of Hilbert’s program for a restricted (but important) logical system. The completeness theorem states:

$$\text{If } \Gamma \models \phi, \text{ then } \Gamma \vdash \phi$$

(if $\phi$ is true in every model of $\Gamma$, then $\phi$ is provable from $\Gamma$). But just one year later, Gödel would prove that this pleasant state of affairs does not extend to systems powerful enough to express arithmetic.

Gödel — Key Dates

  • 1906 — Born in Brünn, Austria-Hungary
  • 1929 — Completeness theorem for first-order logic
  • 1931 — Publishes the incompleteness theorems
  • 1940 — Emigrates to the United States; joins the IAS at Princeton
  • 1940 — Proves the continuum hypothesis is consistent with ZFC
  • 1949 — Discovers rotating universe solutions to Einstein’s equations
  • 1978 — Dies in Princeton of self-starvation due to paranoia

Gödel left Austria in January 1940, after the Anschluss of 1938 had made his position in Vienna untenable, traveling across the Soviet Union and the Pacific to reach the United States. He joined the Institute for Advanced Study in Princeton, where he formed a remarkable friendship with Albert Einstein. The two were frequently seen walking together on the institute grounds, deep in conversation. Einstein said that in his later years, his own work meant little and he went to the institute “just to have the privilege of walking home with Gödel.”

Gödel suffered increasingly from paranoid delusions, particularly a fear of being poisoned. He would eat only food prepared by his wife Adele. When Adele was hospitalized in late 1977, Gödel essentially stopped eating and died on 14 January 1978 of “malnutrition and inanition caused by personality disturbance.” He weighed only 65 pounds (29 kg) at the time of his death.

20.4 Gödel Numbering

The key technical innovation of Gödel’s proof was the arithmetization of syntax — a method for encoding logical formulas, proofs, and even statements about the formal system as natural numbers within the system itself.

Gödel Numbering

A Gödel numbering assigns a unique natural number (the Gödel number) to every symbol, formula, and sequence of formulas (proof) in a formal system. The key properties are:

  1. The encoding is injective (different objects get different numbers).
  2. The encoding is computable (given a formula, we can calculate its number, and vice versa).
  3. Syntactic operations on formulas correspond to arithmetic operations on their Gödel numbers.

Gödel’s original encoding used prime factorization. Assign a number to each basic symbol: for example, $0 \mapsto 1$, $S \mapsto 3$ (successor), $+ \mapsto 5$, $= \mapsto 7$, $( \mapsto 9$, $) \mapsto 11$, and so on. Then the Gödel number of a formula like $S(0) = S(0)$ (which says $1 = 1$) is:

$$\ulcorner S(0) = S(0) \urcorner = 2^3 \cdot 3^9 \cdot 5^1 \cdot 7^{11} \cdot 11^7 \cdot 13^3 \cdot 17^9 \cdot 19^1 \cdot 23^{11}$$

(where each exponent is the code of the corresponding symbol, and the bases are successive primes). By the fundamental theorem of arithmetic, this encoding is unique and recoverable: given any Gödel number, we can factor it and read off the formula.
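The prime-power scheme is easy to make concrete. The following sketch (an illustrative toy using the symbol codes from the example above) encodes a formula as a single integer and recovers it by factoring:

```python
# Toy Gödel numbering: encode a symbol sequence as a product of prime powers,
# then decode it by reading off exponents. Symbol codes follow the chapter's
# illustrative convention.

CODES = {'0': 1, 'S': 3, '+': 5, '=': 7, '(': 9, ')': 11}
SYMBOLS = {v: k for k, v in CODES.items()}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for small examples)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(formula):
    """Gödel number: product over i of (i-th prime) ** code(i-th symbol)."""
    g, ps = 1, primes()
    for sym in formula:
        g *= next(ps) ** CODES[sym]
    return g

def decode(g):
    """Recover the formula from a valid Gödel number via unique factorization."""
    out, ps = [], primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(SYMBOLS[e])
    return ''.join(out)

print(decode(encode('S(0)=S(0)')))  # S(0)=S(0)
```

By the fundamental theorem of arithmetic, `decode(encode(f))` always returns `f`, mirroring the injectivity and computability requirements above.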

The stunning consequence is that a formal system capable of arithmetic can “talk about” its own formulas and proofs, because these are now numbers, and the system can express properties of numbers. For instance, the property “$x$ is the Gödel number of a formula” or “$x$ is the Gödel number of a proof of the formula with Gödel number $y$” can be expressed as arithmetic predicates. This self-referential capacity is the engine of Gödel’s proof.

Self-Reference via Gödel Numbering

The key step is constructing a formula that refers to itself. Using a technique called diagonalization, Gödel constructed a formula $G$ whose Gödel number encodes the statement:

“The formula with Gödel number $\ulcorner G \urcorner$ is not provable in $T$.”

In other words, $G$ says: “I am not provable.” This is the Gödel sentence.
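The diagonal construction has a close programming analogue: a quine, a program that prints its own source code. A quine achieves self-reference the same way the Gödel sentence does, by substituting a description of itself into a template of itself. A minimal Python example:

```python
# A quine: the program's output is its own source text. Self-reference is
# obtained by plugging a description of the template (repr(s)) into the
# template itself — the same trick as the diagonal lemma.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly the two lines of the program, just as $G$, via its Gödel number, manages to speak about itself.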

20.5 The First Incompleteness Theorem (1931)

Gödel’s first incompleteness theorem is one of the most profound results in all of mathematics. Published in his 1931 paper “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I” (“On Formally Undecidable Propositions of Principia Mathematica and Related Systems I”), it shattered the dream of a complete axiomatization of mathematics.

Gödel’s First Incompleteness Theorem

Any consistent formal system $T$ that is capable of expressing elementary arithmetic (i.e., contains Peano arithmetic or a sufficiently strong fragment) is incomplete: there exists a sentence $G$ in the language of $T$ such that:

$$T \nvdash G \quad \text{and} \quad T \nvdash \neg G$$

Moreover, $G$ is true (in the standard model of arithmetic). Thus $T$ contains true statements that it cannot prove.

Outline of the proof: The proof proceeds in several stages.

Step 1: Arithmetization. Using Gödel numbering, encode all formulas and proofs of $T$ as natural numbers. Define the arithmetic predicate $\text{Prov}_T(x)$ meaning “$x$ is the Gödel number of a formula provable in $T$.”

Step 2: Diagonal Lemma. There exists a sentence $G$ such that:

$$T \vdash \bigl(G \leftrightarrow \neg\,\text{Prov}_T(\ulcorner G \urcorner)\bigr)$$

That is, $G$ is provably equivalent (within $T$) to the statement “I am not provable in $T$.”

Step 3: If $T$ is consistent, then $T \nvdash G$. Suppose $T \vdash G$. Then there exists a proof of $G$ in $T$, so $\text{Prov}_T(\ulcorner G \urcorner)$ is true and provable. But $G$ says $\neg\,\text{Prov}_T(\ulcorner G \urcorner)$, so $T$ also proves $\neg G$. This contradicts the consistency of $T$.

Step 4: If $T$ is $\omega$-consistent, then $T \nvdash \neg G$. Since $G$ is not provable (Step 3), there is no proof of $G$, so for every number $n$, the system can prove “$n$ is not a proof of $G$.” If $T$ also proved $\neg G$ (which asserts “there exists a proof of $G$”), this would violate $\omega$-consistency.

Conclusion: $G$ is independent of $T$ — neither provable nor refutable. But $G$ is true in the standard model (since it says “I am not provable,” and indeed it is not provable). Hence there are true but unprovable statements in any sufficiently strong consistent formal system.

Rosser (1936) later strengthened the theorem to remove the $\omega$-consistency assumption, requiring only simple consistency. The theorem applies to Peano Arithmetic, ZFC set theory, any system extending them, and indeed to any recursively axiomatizable consistent system strong enough to represent all computable functions.

20.6 The Second Incompleteness Theorem

Gödel’s second incompleteness theorem is, if anything, even more devastating for Hilbert’s Program than the first.

Gödel’s Second Incompleteness Theorem

If $T$ is a consistent formal system containing elementary arithmetic, then $T$ cannot prove its own consistency. That is:

$$T \nvdash \text{Con}(T)$$

where $\text{Con}(T)$ is the arithmetic sentence asserting “$T$ is consistent” (i.e., $\neg\,\text{Prov}_T(\ulcorner 0 = 1 \urcorner)$).

Proof sketch: The proof of the first incompleteness theorem can be carried out within $T$ itself. Specifically, one can prove in $T$:

$$T \vdash \bigl(\text{Con}(T) \to G\bigr)$$

That is, $T$ can prove “if I am consistent, then $G$ is true.” But we know $T \nvdash G$ (by the first theorem). So if $T$ could prove $\text{Con}(T)$, then by modus ponens it could prove $G$, which it cannot. Therefore $T \nvdash \text{Con}(T)$.

This was a direct blow to the heart of Hilbert’s Program, which sought a finitary proof of the consistency of mathematics. Since any such proof would have to be formalizable in a system at least as strong as arithmetic, Gödel’s second theorem says it cannot succeed. The system cannot vouch for its own reliability.

When Hilbert first learned of Gödel’s results, he was reportedly “somewhat angry.” But unlike some mathematicians who tried to find flaws in the proof, Hilbert eventually accepted the result and shifted to modified forms of his program. The field of proof theory, which Hilbert initiated, continues to this day, seeking relative consistency proofs (showing one system is consistent if another is) rather than absolute ones.

Gentzen’s Consistency Proof

In 1936, Gerhard Gentzen proved the consistency of Peano Arithmetic — but he had to use transfinite induction up to the ordinal $\varepsilon_0 = \omega^{\omega^{\omega^{\cdot^{\cdot^{\cdot}}}}}$, which goes beyond the finitary methods Hilbert envisioned. This is perfectly consistent with Gödel’s theorem: you can prove consistency, but only by using methods stronger than the system itself.

20.7 Consequences and Philosophical Implications

Gödel’s incompleteness theorems have profound consequences that extend far beyond mathematical logic:

  • Formalism is inherently limited: No single formal system can capture all mathematical truth. Mathematics is inexhaustible — there will always be true statements beyond the reach of any given set of axioms.
  • Truth transcends provability: The Gödel sentence $G$ is true but unprovable. This establishes a fundamental distinction between truth and formal provability that philosophers continue to debate.
  • No ultimate foundation: We can never achieve absolute certainty about the consistency of our mathematical foundations. We can only prove the consistency of weaker systems within stronger ones, leading to an infinite regress.
  • Independence phenomena: Gödel’s work opened the door to independence results in set theory. Gödel himself (1940) showed that the axiom of choice and the continuum hypothesis are consistent with ZF set theory; Cohen (1963) showed that their negations are consistent too. These are concrete examples of Gödelian incompleteness in mainstream mathematics.

The physicist and mathematician Roger Penrose has controversially argued that Gödel’s theorems show that human mathematical understanding cannot be fully captured by any algorithm or Turing machine. Since humans can “see” the truth of the Gödel sentence that the formal system cannot prove, Penrose argues, human minds are not simply running algorithms. This claim remains highly debated; critics point out that humans are not infallible and cannot necessarily “see” the truth of Gödel sentences for arbitrary systems.

20.8 Alan Turing (1912–1954)

Alan Mathison Turing was born on 23 June 1912 in London. From an early age he displayed extraordinary intellectual abilities and an independent, unconventional personality. He attended Sherborne School, where he was deeply influenced by his friend Christopher Morcom, whose sudden death in 1930 profoundly affected him. Turing entered King’s College, Cambridge, in 1931, where he was elected a Fellow at the remarkably young age of 22 on the basis of a dissertation proving the central limit theorem (independently of Lindeberg, who had proved it earlier).

Turing’s 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” is one of the foundational documents of computer science. Written when he was just 23, it introduced the concept of a Turing machine and used it to prove that the Entscheidungsproblem is unsolvable. He then went to Princeton to study under Alonzo Church, who had independently obtained the same result using a different formalism (the lambda calculus).

Turing — Key Dates

  • 1912 — Born in London
  • 1936 — “On Computable Numbers” — Turing machines, halting problem
  • 1938 — PhD under Church at Princeton
  • 1939–45 — Bletchley Park: breaks the Enigma cipher
  • 1945 — Designs the ACE (Automatic Computing Engine)
  • 1950 — “Computing Machinery and Intelligence” — the Turing test
  • 1952 — Convicted of gross indecency (homosexuality); chemical castration
  • 1954 — Dies from cyanide poisoning (probable suicide), age 41

During World War II, Turing worked at Bletchley Park, the British codebreaking center. There he was instrumental in breaking the German Enigma cipher, designing the Bombe machines that automated the cryptanalysis. Historians estimate that Turing’s work shortened the war by two years and saved millions of lives. Yet his wartime contributions remained classified for decades.

After the war, Turing worked on computer design (the ACE at the National Physical Laboratory), wrote a foundational paper on artificial intelligence (“Computing Machinery and Intelligence,” 1950, which proposed the Turing test), and studied mathematical biology (reaction-diffusion equations for morphogenesis). His breadth was extraordinary.

In 1952, Turing was prosecuted for homosexuality, which was then a criminal offense in Britain. He was convicted and subjected to chemical castration through injections of synthetic estrogen. On 7 June 1954, he was found dead from cyanide poisoning, with a half-eaten apple beside him. The inquest ruled suicide, though some have speculated it was an accident. In 2013, Queen Elizabeth II granted Turing a posthumous royal pardon.

20.9 Turing Machines

Turing’s great insight was to give a precise mathematical definition of what it means for a function to be “computable” or for a problem to be “effectively decidable.” He did this by imagining the simplest possible computing device that could still carry out any computation.

Turing Machine — Formal Definition

A Turing machine consists of:

  1. An infinite tape divided into cells, each containing a symbol from a finite alphabet $\Gamma$ (including a blank symbol $\sqcup$).
  2. A head that can read and write one symbol at a time and move left (L) or right (R).
  3. A finite set of states $Q$, including a start state $q_0$ and halt states $q_{\text{accept}}$ and $q_{\text{reject}}$.
  4. A transition function $\delta: Q \times \Gamma \to Q \times \Gamma \times \{L, R\}$ that determines, for each state and tape symbol, the next state, the symbol to write, and the direction to move.

Formally, a Turing machine is a 7-tuple $M = (Q, \Sigma, \Gamma, \delta, q_0, q_{\text{accept}}, q_{\text{reject}})$ where $\Sigma \subseteq \Gamma$ is the input alphabet.

A Turing machine computes a function $f: \Sigma^* \to \Sigma^*$ if, when started on a tape containing input $w$, it eventually halts with $f(w)$ on the tape. A function is Turing-computable if some Turing machine computes it.
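The definition above is simple enough to execute directly. Here is a minimal simulator (a sketch, not a full formal treatment: the tape is a sparse dictionary, and the example machine, which complements a binary string, is our own toy):

```python
# Minimal Turing-machine simulator following the tuple definition above.
BLANK = '_'

def run(delta, q0, accept, reject, tape_input, max_steps=10_000):
    """Run a TM; delta maps (state, symbol) -> (state, symbol, 'L' or 'R')."""
    tape = dict(enumerate(tape_input))   # sparse tape; blank everywhere else
    state, head = q0, 0
    for _ in range(max_steps):
        if state in (accept, reject):    # halt states end the computation
            break
        sym = tape.get(head, BLANK)
        state, write, move = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    # Read back the non-blank portion of the tape as the output
    cells = sorted(k for k, v in tape.items() if v != BLANK)
    return state, ''.join(tape[k] for k in cells)

# Example machine: flip every bit, halting at the first blank.
delta = {
    ('q0', '0'):  ('q0', '1', 'R'),
    ('q0', '1'):  ('q0', '0', 'R'),
    ('q0', BLANK): ('qa', BLANK, 'R'),
}
state, out = run(delta, 'q0', 'qa', 'qr', '1011')
print(state, out)  # qa 0100
```

Note that `run` itself takes a machine description plus an input and reproduces that machine’s behavior — it is, in miniature, the same idea as Turing’s universal machine.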

One of Turing’s most remarkable constructions was the universal Turing machine $U$. This is a single fixed Turing machine that can simulate any other Turing machine. Given as input the description (encoding) of a machine $M$ and an input $w$, the universal machine $U$ produces the same output as $M$ would on $w$:

$$U(\langle M \rangle, w) = M(w)$$

This is the theoretical foundation of the stored-program computer: a single machine that can execute any program. Every modern computer is a physical realization of Turing’s universal machine.

The Church–Turing Thesis

Every effectively computable function is Turing-computable. This is not a mathematical theorem but a thesis (hypothesis) about the nature of computation. It is supported by the remarkable fact that every other reasonable model of computation — lambda calculus (Church), recursive functions (Gödel–Herbrand), Post systems, register machines, cellular automata, quantum computers (for computability, not speed) — computes exactly the same class of functions.

20.10 The Halting Problem

The halting problem asks: given a description of a Turing machine $M$ and an input $w$, determine whether $M$ will eventually halt on input $w$, or run forever. Turing proved that no algorithm can solve this problem in general.

Undecidability of the Halting Problem (Turing, 1936)

There is no Turing machine $H$ that, for every Turing machine $M$ and input $w$, correctly decides:

$$H(\langle M \rangle, w) = \begin{cases} 1 & \text{if } M \text{ halts on } w \\ 0 & \text{if } M \text{ runs forever on } w \end{cases}$$

Proof by contradiction: Suppose such a machine $H$ exists. Construct a new machine $D$ (the “diagonal” machine) as follows:

On input $\langle M \rangle$ (the encoding of any Turing machine $M$):

  1. Run $H(\langle M \rangle, \langle M \rangle)$ — that is, ask whether $M$ halts when given its own description as input.
  2. If $H$ says $M$ halts, then $D$ enters an infinite loop (does not halt).
  3. If $H$ says $M$ does not halt, then $D$ halts immediately.

Now ask: does $D$ halt on input $\langle D \rangle$?

  • If $D$ halts on $\langle D \rangle$, then $H(\langle D \rangle, \langle D \rangle) = 1$, so by construction $D$ loops forever. Contradiction.
  • If $D$ does not halt on $\langle D \rangle$, then $H(\langle D \rangle, \langle D \rangle) = 0$, so by construction $D$ halts. Contradiction.

In both cases we reach a contradiction. Therefore the assumed machine $H$ cannot exist. The halting problem is undecidable. $\blacksquare$
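The diagonal construction can be rendered directly in code. The sketch below builds $D$ from any claimed halting oracle `halts` (a hypothetical function; no correct one can exist, which is the point):

```python
# Rendering the diagonal argument in Python. `halts(f, x)` is a hypothetical
# oracle claimed to return True iff f(x) eventually terminates.

def make_d(halts):
    """Build the diagonal program D from a claimed halting oracle."""
    def d(f):
        if halts(f, f):      # would f halt when fed its own description?
            while True:      # ...then D deliberately loops forever
                pass
        return None          # ...otherwise D halts immediately
    return d

# Any oracle is wrong about D on input D:
#  - if it answers True,  d(d) loops, so the answer should have been False;
#  - if it answers False, d(d) halts, so the answer should have been True.
# Here we expose an always-False oracle: it claims d(d) never halts, yet d(d)
# returns immediately.
d = make_d(lambda f, x: False)
print(d(d))  # None — the oracle's "runs forever" verdict was wrong
```

No matter how `halts` is implemented, $D$ is built to do the opposite of whatever `halts` predicts about it, so no implementation can be correct on every input.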

Note the structural similarity to Cantor’s diagonal argument (proving the uncountability of the reals) and to Russell’s paradox. This is no coincidence: all three are instances of the general technique of diagonalization, which pervades mathematical logic and computability theory.

20.11 The Entscheidungsproblem — Solved

The Entscheidungsproblem (“decision problem”) was posed by Hilbert and Ackermann in 1928: is there an algorithm that takes as input a statement in first-order logic and determines whether it is universally valid (true in every interpretation)?

In 1936, both Turing and Alonzo Church independently proved that the answer is no. Church used his lambda calculus to show that the problem is unsolvable; Turing used his machines. Both proofs are fundamentally about the same phenomenon: there are well-defined mathematical questions that no mechanical procedure can settle.

The connection to Gödel’s work is deep. Gödel showed that certain true statements are unprovable; Turing and Church showed that certain problems are undecidable — there is no algorithm to determine membership. These are related but distinct concepts:

Incompleteness vs. Undecidability

  • Gödel (1931): There exist true arithmetic statements that no fixed formal system can prove. (Limits of axiom systems.)
  • Church–Turing (1936): There exist well-posed yes/no questions that no algorithm can answer in general. (Limits of computation.)

Both reveal fundamental limitations, but they operate at different levels. Gödel’s result is about proof within a fixed system; Church and Turing’s result is about algorithmic decidability regardless of what axioms one adopts.

Remarkably, these results resolved the last part of Hilbert’s Program in the negative. The dream of a single decision procedure for all of mathematics was dead. The three pillars of Hilbert’s vision — completeness, consistency, decidability — had all been knocked down:

  • Completeness: Impossible (Gödel’s first theorem).
  • Provable consistency: Impossible from within (Gödel’s second theorem).
  • Decidability: Impossible (Church–Turing).

20.12 Computability and Complexity

While Turing’s work established the boundary between computable and non-computable, a new question emerged in the 1960s and 1970s: among the problems that are computable, which ones can be solved efficiently? This led to computational complexity theory.

P and NP

P is the class of decision problems solvable by a deterministic Turing machine in polynomial time: there exists a polynomial $p(n)$ such that the machine halts in at most $p(|w|)$ steps on any input $w$ of length $|w|$.

NP is the class of decision problems for which a “yes” answer has a certificate (proof) that can be verified in polynomial time. Equivalently, NP is the class solvable by a nondeterministic Turing machine in polynomial time.

Clearly $\text{P} \subseteq \text{NP}$: if you can solve a problem efficiently, you can certainly verify a solution efficiently. The great open question is:

Does P = NP?

This is one of the seven Clay Millennium Prize Problems, with a $1,000,000 reward. Most experts believe $\text{P} \neq \text{NP}$, but proving it has resisted all attempts.

In 1971, Stephen Cook proved that the Boolean satisfiability problem (SAT) is NP-complete: it is in NP, and every other NP problem can be reduced to it in polynomial time. This means that if SAT could be solved in polynomial time, then every NP problem could be. Karp (1972) then showed 21 other natural problems are NP-complete, including graph coloring, Hamiltonian cycle, and subset sum.

Cook’s Theorem (1971)

SAT is NP-complete. That is:

  1. SAT is in NP (a satisfying assignment can be verified in polynomial time).
  2. For every problem $L \in \text{NP}$, there exists a polynomial-time reduction $L \leq_p \text{SAT}$.

Consequence: $\text{SAT} \in \text{P} \iff \text{P} = \text{NP}$.

Thousands of NP-complete problems have been identified across combinatorics, graph theory, scheduling, cryptography, and biology. If $\text{P} = \text{NP}$, most modern cryptography would be broken; optimization problems in logistics, AI, and biology would become tractable; and the nature of mathematical creativity itself would be called into question (since finding proofs would be as easy as verifying them).

20.13 Legacy — Foundations of Computer Science and the Limits of Knowledge

Gödel and Turing, in different but deeply connected ways, established the limits of what formal systems and algorithms can achieve. These limits are not failures but are among the most profound discoveries in intellectual history.

Gödel showed that mathematical truth outstrips formal proof: any attempt to capture all of mathematics in a single formal system is doomed to leave something out. This does not mean mathematics is broken — it means mathematics is inexhaustible. There will always be new truths to discover, new axioms to explore, new questions that current methods cannot answer.

Turing showed that computation has absolute limits: some problems are intrinsically unsolvable by any mechanical procedure. At the same time, by defining computation precisely, he laid the theoretical foundation for the digital revolution. The universal Turing machine is the conceptual ancestor of every laptop, smartphone, and server in the world.

Together, their work created the field of mathematical logic as we know it today, gave birth to theoretical computer science, and profoundly shaped philosophy of mind, linguistics (formal language theory), and even biology (computational models of the brain). The questions they raised — What can be known? What can be computed? What are the limits of reason? — remain as vital and as deep as ever.

The ACM Turing Award, the highest honor in computer science (often called the “Nobel Prize of computing”), is named after Alan Turing. Gödel’s incompleteness theorems are regularly cited as among the most important results in the history of mathematics, alongside the work of Euclid, Newton, and Einstein.
