What are the philosophical consequences of Goedel's incompleteness theorems?

I want to write a philosophical essay centered on Goedel's incompleteness theorems. However, I cannot find any real philosophical consequences that I could write more than half a page about. I have read Franzen's book (Gödel's Theorem: An Incomplete Guide to Its Use and Abuse) and Peter Smith's (An Introduction to Gödel's Theorems). I really cannot find any philosophical discussion topic that is genuinely a consequence of the incompleteness theorems. I looked a little into the mind vs. machines debate (e.g. http://users.ox.ac.uk/~jrlucas/mmg.html), but one can find too many arguments (as in Franzen's book) against the proposition that Goedel's incompleteness theorems settle anything in this debate.

So I would be grateful if someone could direct me toward interesting philosophical (or mathematical) implications or further directions I could write about.


I think you might like to read a great recent paper by Scott Aaronson called "Why Philosophers Should Care About Computational Complexity". It covers a wide range of topics in philosophy that have been dramatically changed not just by computability theory but also by complexity theory.

It discusses a few points about Godel. In particular, it mentions a great (but not well-known) 1956 letter from Godel to von Neumann in which Godel essentially anticipates the whole P vs. NP question and considers what its ramifications for human mathematics would be if P actually turned out to equal NP.

Another recent paper that uses Godel's theorems in a very technical way to address a philosophical problem is "The Surprise Examination Paradox and the Second Incompleteness Theorem" by Kritchman and Raz.

In it, they take the classic example of an exam that will be given next week, on a day you won't be able to know ahead of time (it's also often re-phrased in terms of an execution next week whose day you won't know in advance; this is how it is described at Wikipedia).

There is a very naive "resolution" to this paradox using backward induction. Kritchman and Raz give a cool argument showing that it all hinges on what you mean by "to know the day of the exam ahead of time." It turns out that if you mean "be able to prove the exam won't be tomorrow," then Godel's second incompleteness theorem actually lets you escape the backward induction, and hence the seemingly paradoxical set-up doesn't have to be paradoxical at all.
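Very roughly, the shape of the argument looks like this (my schematic paraphrase; the paper itself is more careful). Let $T$ be the theory in which the students reason and $\mathrm{Prv}_T$ its provability predicate. The backward induction starts by ruling out the last day, and that step requires the students to establish

$$T \vdash \mathrm{Prv}_T(\ulcorner\varphi\urcorner) \rightarrow \varphi, \qquad \varphi = \text{"the exam is on the last day,"}$$

i.e. to trust that their own proofs track the truth. But by Löb's theorem (a close relative of the second incompleteness theorem), $T$ proves $\mathrm{Prv}_T(\ulcorner\varphi\urcorner) \rightarrow \varphi$ only if $T$ already proves $\varphi$ itself. So the very first step of the backward induction is blocked.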

Also, a very important place where Godel's theorem has been invoked is Roger Penrose's book "The Emperor's New Mind." Penrose's main argument is that brains cannot be given a fully reductionist explanation in terms of currently understood physics, because a human mathematician can somehow "see" the consistency of his or her own "formal system," which Godel's theorem ought to rule out if our brains were just formal systems in the sense of Turing machines and the Church-Turing thesis. Hence Penrose rejects the plausibility of Strong A.I., pending the discovery of something like quantum-gravitational effects in the brain (which, he asserts, we wouldn't be able to engineer or harness for the A.I. part).

I believe Robin Hanson wrote up an excellent rebuttal to Penrose's highly speculative use of Godel's theorem (link). Here's just a brief quote from that rebuttal:

"Penrose gives many reasons why he is uncomfortable with computer-based AI. He is concerned about "the 'paradox' of teleportation" whereby copies could be made of people, and thinks "that Searle's [Chinese-Room] argument has considerable force to it, even if it is not altogether conclusive." He also finds it "very difficult to believe ... some kind of natural selection process being effective for producing [even] approximately valid algorithms" since "the slightest 'mutation' of an algorithm ... would tend to render it totally useless."

These are familiar objections that have been answered quite adequately, in my opinion. But the anti-AI argument that stands out to Penrose as "as blatant a reductio ad absurdum as we can hope to achieve, short of an actual mathematical proof!" turns out to be a variation on John Lucas's much-criticized "Godel" argument, offered in 1961.

A mathematician often makes judgments about what mathematical statements are true. If he or she is not more powerful than a computer, then in principle one could write a (very complex) computer program that exactly duplicated his or her behavior. But any program that infers mathematical statements can infer no more than can be proved within an equivalent formal system of mathematical axioms and rules of inference, and by a famous result of Godel, there is at least one true statement that such an axiom system cannot prove to be true. "Nevertheless we can (in principle) see that $P_k(k)$ is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also."

This argument won't fly if the set of axioms to which the human mathematician is formally equivalent is too complex for the human to understand. So Penrose claims that this can't be the case, because "this flies in the face of what mathematics is all about! ... each step [in a math proof] can be reduced to something simple and obvious ... when we comprehend them [proofs], their truth is clear and agreed by all."

And to reviewers' criticisms that mathematicians are better described as approximate and heuristic algorithms, Penrose responds (in BBS) that this won't explain the fact that "the mathematical community as a whole makes extraordinarily few" mistakes.

These are amazing claims, which Penrose hardly bothers to defend. Reviewers knowledgeable about Godel's work, however, have simply pointed out that an axiom system can infer that if its axioms are self-consistent, then its Godel sentence is true. An axiom system just can't determine its own self-consistency. But then neither can human mathematicians know whether the axioms they explicitly favor (much less the axioms they are formally equivalent to) are self-consistent. Cantor and Frege's proposed axioms of set theory turned out to be inconsistent, and this sort of thing will undoubtedly happen again."
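The key point of that rebuttal has a compact standard formalization (standard notation, not taken from Hanson's text): for any consistent, recursively axiomatized theory $T$ containing enough arithmetic, with Godel sentence $G_T$,

$$T \vdash \mathrm{Con}(T) \rightarrow G_T, \qquad \text{yet} \qquad T \nvdash \mathrm{Con}(T).$$

So a machine reasoning within $T$ can "see" exactly what the human sees, namely that $G_T$ is true provided $T$ is consistent; what neither the machine nor the human can do from inside $T$ is establish $\mathrm{Con}(T)$ itself.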

As a final aside, I think the Aaronson paper linked above does a superb job of synthesizing the complexity-theory reasons why the Chinese Room argument totally fails. It's just a nerd interest, but something perhaps others here will appreciate.


I don't think it's quite right that Gödel's incompleteness theorems have had no philosophical consequences -- but the consequences have been ones of taking away from philosophy rather than adding to it. It's not that there are any (or many) interesting things that are thought now which would not have been thought without Gödel. But there are things that used to be thought but now aren't, due to Gödel's theorems.

In particular, consider the question: How can we be sure something is true just because we see a mathematical proof of it? That used to be a sort of meaningless non-question. (If there's a proof it must be true, because that's what proofs are for. You smokin' something?) It became a more urgent (and real) question during the 19th century, with the growing emphasis on rigor in analysis and in particular the discovery that non-Euclidean geometry is consistent.

Around 1900 a common hope among leading mathematicians appears to have been that this question could be put to rest conclusively by finding a mathematical proof for a theorem saying that mathematical proofs are always trustworthy. This idea is generally known as Hilbert's program. The program died when Gödel's second incompleteness theorem showed that such a proof is impossible.
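For concreteness, here is the theorem that killed the program, in its standard modern form (Gödel's original statement was somewhat narrower): if $T$ is a consistent, recursively axiomatizable theory extending elementary arithmetic, then

$$T \nvdash \mathrm{Con}(T),$$

where $\mathrm{Con}(T)$ is the arithmetized assertion that no $T$-proof ends in a contradiction. A fortiori, no such $T$ can prove the trustworthiness of all of mathematics, since it cannot even certify its own.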

Now, the impossibility of mathematics pulling itself up by its bootstraps is not (in my opinion) itself a philosophical consequence. But what I think is interesting is that people used to think that the program was meaningful at all.

When I read about Hilbert's program today, my immediate reaction is something like: So what? Even if a proof that mathematics is trustworthy could be found -- imagine that we hadn't heard of Gödel and didn't know that such a proof cannot exist -- why would we be prepared to believe that proof in the first place? Because proofs are to be believed in general? But that's what we're trying to establish! It would be a circular argument, like arguing that [insert title of holy book] must be the inerrant word of God simply because it itself claims to be.

So I, today, wouldn't necessarily believe a self-proof that mathematics works, even if it turned out that Gödel had made a mistake somewhere and an actual self-proof was found. However, Hilbert and his followers until 1931 evidently (if my secondary sources are to be believed) thought that such a proof would be worth something, and could convince someone of something meaningful. The more I think about that viewpoint, the more alien it feels to me.

How could they think like that? It's not as if Hilbert or those who followed his program were in any way stupid. And why can't I think like that? It's at least a natural hypothesis that the reason this kind of reasoning sounds nonsensical to us today is 80 years of the accumulated influence of Gödel's results.


Godel incompleteness is the most famous, but I think that from a modern perspective it is Turing's form of incompleteness (undecidability) that is more consequential, philosophically and mathematically.

Turing showed that there is a universal computer. As an immediate consequence of this, there are undecidable problems (such as the halting problem).
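The classic diagonal argument fits in a few lines of Python. To be clear, `halts` below is a hypothetical decider that the argument shows cannot exist; it is not a real API:

```python
def halts(f, x):
    """Hypothetical: returns True iff f(x) eventually halts.
    The whole point of the argument is that no such function can exist."""
    ...

def diagonal(f):
    # Do the opposite of whatever halts() predicts f does on itself.
    if halts(f, f):
        while True:   # halts() said f(f) halts, so loop forever
            pass
    else:
        return        # halts() said f(f) loops, so halt immediately

# Does diagonal(diagonal) halt?
#   If halts(diagonal, diagonal) returns True, diagonal(diagonal) loops forever.
#   If it returns False, diagonal(diagonal) halts.
# Either way halts() is wrong about diagonal(diagonal), so no correct halting
# decider exists. Universality is what makes this bite: it lets a program be
# handed to another program (or to itself) as ordinary data.
```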

Godel incompleteness is nothing more or less than the statement that the evolution of a universal computer can be encoded arithmetically (using $+$ and $\times$). This is ultimately a very technical point, and many people who talk so casually about Godel incompleteness would have no idea how this encoding works.
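To give a flavor of the encoding (a standard textbook illustration, not something spelled out above): "machine $e$ halts on input $x$" becomes the arithmetic sentence

$$\exists n \; T(e, x, n),$$

where Kleene's $T$ predicate says "$n$ codes a complete halting computation of machine $e$ on input $x$," and $T$ is definable from $+$ and $\times$ alone (Gödel's $\beta$-function does the heavy lifting of coding finite sequences as numbers). Since no algorithm decides the left-hand question, no consistent, recursively axiomatized theory extending basic arithmetic can prove or refute every sentence of this form, and that is incompleteness.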