What breaks the Turing Completeness of simply typed lambda calculus?

On the Wikipedia page about Turing Completeness, we can read that:

Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not.

I am curious as to what exactly makes the simply typed lambda calculus not Turing complete. Is it only related to the fact that the simply typed lambda calculus is strongly normalizing, so that every term halts and its halting problem is trivially decidable?

As a bonus, it would be terrific if anyone could provide me with an example of a computable function that cannot be defined in the simply typed lambda calculus.


Solution 1:

Nature of incompleteness

A computational system is said to be Turing complete if it can be used to simulate an arbitrary Turing machine on an arbitrary input string. Since $\lambda^{\rightarrow}$ is strongly normalizing, every well-typed program is guaranteed to reduce to an irreducible normal form (which is to say, every program halts). As such, we couldn't possibly write a $\lambda^{\rightarrow}$ program that simulates a Turing machine on an input for which the machine never halts: every $\lambda^{\rightarrow}$ program halts, whereas the simulated machine on such an input, by assumption, does not.
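To make the contrast concrete, here is a small sketch in Python (standing in for the untyped calculus): the classic self-application term $\Omega = (\lambda x.\,x\,x)(\lambda x.\,x\,x)$ reduces to itself forever, so any calculus that admits it cannot be strongly normalizing. In Python the divergence shows up as the interpreter blowing its recursion limit rather than producing a value.

```python
import sys

# omega = λx. x x. Applying it to itself gives the diverging term
# Ω = (λx. x x)(λx. x x), which beta-reduces to itself forever.
omega = lambda x: x(x)

sys.setrecursionlimit(200)  # keep the demonstration quick
try:
    omega(omega)  # never reaches a normal form
    halted = True
except RecursionError:
    halted = False

print(halted)  # False: the self-application diverges
```

The same term is untypable in $\lambda^{\rightarrow}$, which is exactly how the type system rules out this kind of divergence.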

Less formally, simulating arbitrary Turing machines on arbitrary inputs means you need some form of conditional choice (so as to emulate the tape head's ability to move along the tape), and some means of storing and manipulating an arbitrary number of memory locations (so as to emulate the tape). For $\lambda^{\rightarrow}$, typing rules are sufficient to encapsulate the notion of choice, but we can't operate over arbitrary amounts of memory because types can't recurse.
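The "choice" half of that claim can be illustrated with Church booleans, which are simply typable (for instance at type $\alpha \rightarrow \alpha \rightarrow \alpha$). A Python sketch, with the lambda terms written out directly:

```python
# Church booleans: conditional choice needs no recursion in types.
# TRUE = λt. λf. t selects its first argument; FALSE = λt. λf. f
# selects its second. Both fit the simple type a -> a -> a.
TRUE = lambda t: lambda f: t
FALSE = lambda t: lambda f: f

# "if b then t else e" is just application: b t e
IF = lambda b: lambda t: lambda e: b(t)(e)

print(IF(TRUE)("then-branch")("else-branch"))   # then-branch
print(IF(FALSE)("then-branch")("else-branch"))  # else-branch
```

What $\lambda^{\rightarrow}$ cannot do is iterate this kind of choice over an unbounded structure, which is where the missing recursion bites.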

Counterexamples to typability

As far as things that can't be expressed in $\lambda^{\rightarrow}$ go, a classic example is fixed-point operators. Consider for instance the Y combinator, defined as $\lambda f.(\lambda x.\, f(x\,x))(\lambda x.\, f(x\,x))$. Notice that within $(x\,x)$, no simple type can be assigned to $x$: the self-application forces $x$'s type $\tau$ to satisfy $\tau = \tau \rightarrow \sigma$, which no finite (simple) type does.
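In an untyped setting, by contrast, that self-application is harmless. A Python sketch using the call-by-value variant of Y (the Z combinator, since Python evaluates arguments eagerly) shows the fixed point doing real work; the `x(x)` inside it is exactly the subterm a simple type checker would reject:

```python
# Z combinator: the call-by-value form of the Y combinator.
# (Plain Y would loop forever under Python's strict evaluation.)
# The self-application x(x) is what simple types forbid: x would
# need a type t with t = t -> s.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without any explicit recursion or loops:
# Z ties the recursive knot for us.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

Languages like OCaml or Haskell reject the analogous definition for precisely the occurs-check reason above, unless one smuggles the recursion in through a recursive datatype.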

An answer to a similar question on the CSTheory site suggests a cleverer example: the function that takes a computable function as its argument and returns that function's Gödel number. Here again self-reference plays a central role, and a diagonal argument shows that no simply typed term for such a function can exist.