Exactly when and why can a Turing-machine not solve the halting problem?

I think the following portion from your question is most important:

But, the halting-detection machines used by the Busy Beaver researchers don't have too much power. They have too little power. Currently they can't solve $n=5$. OK, so we give them some more power. Maybe at some point they can solve $n=5$ ... but they still can't solve $n=6$. Maybe we can give them enough power to solve $n=6$, or $n=7$

or ....

... so my question is: is there some 'special' value of $n$, say $n=m$ where this has to stop. Where, somehow, the only way to solve $n=m$, is by a machine that has 'too much' power? But why would that be?

Computing $\Sigma(5)$ isn't simply a matter of giving Turing machines "more power." The reason we don't know $\Sigma(5)$ right now is that there are 5-state Turing machines which mathematicians believe never halt, but have not been able to prove never halt. The problem is not as simple as just enumerating all of the 5-state Turing machines, since once you've done that, you still need to figure out whether each machine halts, which, as you know, is not a trivial problem. We've been able to do this for 4-state Turing machines, but we don't know yet whether we can do it for 5-state Turing machines, because there may very well be 5-state Turing machines which we can never prove to be halting or non-halting within the standard system of mathematics (that is, ZFC).
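To see concretely why running machines with a step budget can classify some of them but can never by itself prove non-halting, here is a minimal sketch in Python. The simulator and the machine encoding are toy illustrations of my own, not the actual tooling Busy Beaver researchers use:

```python
# Toy illustration: simulate a 2-state, 2-symbol Turing machine with a
# step budget.  Machines that halt within the budget get classified;
# everything else stays "unknown" -- a finite budget can never prove
# non-halting on its own.

def run_tm(transitions, max_steps):
    """transitions: {(state, symbol): (write, move, next_state)};
    next_state None means halt.  Returns ('halted', ones) or ('unknown', None)."""
    tape, head, state = {}, 0, 'A'
    for _ in range(max_steps):
        write, move, nxt = transitions[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if nxt is None:
            return 'halted', sum(tape.values())
        state = nxt
    return 'unknown', None

# The 2-state Busy Beaver champion: writes 4 ones, then halts.
bb2 = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, None),
}
print(run_tm(bb2, 100))   # ('halted', 4)

# A trivial non-halting machine: the budget simply runs out.
loop = {('A', 0): (0, +1, 'A'), ('A', 1): (0, +1, 'A'),
        ('B', 0): (0, +1, 'B'), ('B', 1): (0, +1, 'B')}
print(run_tm(loop, 100))  # ('unknown', None)
```

For the looping machine we happen to *know* it never halts (it blindly moves right forever), but the simulator cannot tell that apart from a machine that halts at step 101. That gap is exactly where the hard analysis lives.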

Now, you've asked what the magic number is: what is the magic number $n=m$ such that we will never be able to solve for $\Sigma(n)$? As I've said above, that magic number could very well be $n=5$, but that hasn't been proven yet. However, mathematicians have proven that $\Sigma(1919)$ is indeed unknowable within ZFC. Before explaining this, I think it might be helpful to quote your question again:

Then again, these machines don't do any explicit analysis of the machines involved ... they just happen to give the right value. So, maybe there is still some value of $n$ where the approach of actually trying to analyze and predict the behavior of machines starts to break down for some fundamental, again possibly self-referential, reason?

My answer to this question is yes: there is a 1919-state Turing machine for which predicting whether it halts is fundamentally unsolvable within our system of mathematics. The way mathematicians proved $\Sigma(1919)$ is unknowable is by constructing a 1919-state Turing machine $M$ which halts if there is a contradiction within ZFC and never halts if ZFC is consistent. However, there is no way to prove ZFC is consistent using the axioms of ZFC, because of Gödel's Second Incompleteness Theorem. This means we can never use the ZFC axioms of mathematics to prove whether machine $M$ halts, because doing so would constitute a proof that ZFC is consistent. Thus, mathematicians cannot predict whether machine $M$ halts, which means that the busy-beaver problem for 1919-state Turing machines is unsolvable within ZFC.

I hope this gives you some more insight into why $\Sigma(n)$ is solvable for small values of $n$ but unsolvable for larger values of $n$. Anyway, I am certainly not an expert in theory of computation, so if someone would like to add an alternate explanation/clarifying comments to my answer, feel free. Thanks!


Since, as you observe, any finite amount of the halting problem - that is, any set of the form $H\upharpoonright s:=\{x<s:\Phi_x(x)\downarrow\}$ - is computable, there isn't any particular sharp impossibility point. There are some interesting "phase transitions" which appear relevant - e.g. at a certain point we hit our first universal machine - but I don't know of any which have any claim to be the point where the halting problem becomes noncomputable.

On the other hand, as you also observe the way in which the $H\upharpoonright s$s are computable is non-uniform (otherwise, the whole halting problem would be computable). So we can try to measure this "ongoing complexity." There are two, to my mind, natural approaches:

  • Given $n$, how far up the "hierarchy of theories" - from fragments of PA, to fragments of $Z_2$, to fragments of ZFC, to ZFC + large cardinals - do we have to go to get a theory which can decide whether each of the first $n$ Turing machines halts on input $0$?

  • Given $n$, how complicated is the finite string consisting of the first $n$ bits of the characteristic function of the halting problem (call this string "$\eta_n$")?
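As a crude, runnable illustration of the second approach: swap real Turing machines for a toy family of programs (here, the Collatz iteration started at $i$, with a step budget), build the resulting "halting string," and use zlib compression as a very rough stand-in for Kolmogorov complexity. Both substitutions are my own simplifications for illustration:

```python
# Build the first n bits of a "halting" string for a toy program family:
# bit i records whether the Collatz iteration from i reaches 1 within a
# step budget.  zlib is only a coarse proxy for Kolmogorov complexity,
# and the budget makes this an approximation of the true string.
import zlib

def toy_halts(i, budget=10_000):
    n = i
    for _ in range(budget):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

eta = ''.join('1' if toy_halts(i) else '0' for i in range(1, 257))
# Every program in this toy family halts quickly, so eta is all 1s and
# compresses extremely well; the genuine halting string is expected to
# be far less compressible, and that growth in complexity is the point.
print(len(zlib.compress(eta.encode())), "compressed bytes for", len(eta), "bits")
```

The interesting quantity is how the compressed size of the true $\eta_n$ grows with $n$; for this trivially regular toy string it barely grows at all.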

Of these two approaches, the first has some draw which the second lacks, but it's also far more vague and limited. The second winds up leading to a very rich theory, namely the theory of Kolmogorov complexity (and its attendant notions, like algorithmic randomness), and also partially subsumes the former question. So I think that's my answer to your question: ultimately you won't find a sharp transition point, but the study of the dynamic behavior of the complexity of the halting problem is quite rewarding.


For example, you can construct a Turing machine (I don't know how many states you need, but it is a finite number) that searches for a counterexample to Goldbach's conjecture, i.e. an even number $> 2$ that is not the sum of two primes, going through the even numbers one by one. For each even number $n > 2$ it checks each $k$ from $2$ to $n/2$: if some $k$ is prime and $n-k$ is also prime, it moves on to the next $n$, but if it gets through all the $k$ without finding such a pair, it halts. Thus this Turing machine halts if and only if Goldbach's conjecture is false. In order to decide whether it will halt, your analysis would need to decide Goldbach's conjecture.
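The search just described can be transcribed directly into Python, as an ordinary program rather than an explicit Turing-machine table. The `limit` parameter is a convenience of mine so the sketch terminates; the true machine searches unboundedly:

```python
# Halt (return a value) as soon as some even n > 2 is not a sum of
# two primes; with limit = infinity this halts iff Goldbach fails.
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample(limit):
    """Search even n in (2, limit]; return the first counterexample,
    or None if every even n up to limit is a sum of two primes."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(k) and is_prime(n - k) for k in range(2, n // 2 + 1)):
            return n
    return None

print(goldbach_counterexample(10_000))  # None: no counterexample below 10000
```

Deciding whether the unbounded version halts is exactly as hard as deciding Goldbach's conjecture, which is the point of the construction.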

And when you're done with that one, there are lots of other conjectures you could check with a Turing machine.


A potential Busy Beaver has three possibilities:

  1. It is easy to show it stops
  2. It is easy to show it never stops
  3. It is difficult to show either case

Number 1 either stops quickly, or it has a repeating pattern with an eventual flaw that causes it to stop.

Number 2 has a repeating pattern and never has a flaw, causing it to continue forever.

Number 3 is the interesting case. Its behaviour is chaotic. It might have short-term patterns, but it has no long-term patterns. Its future states can be predicted a short way ahead without actually running the machine, but there comes a point where its behaviour can no longer be predicted without running it. At that point you need a machine capable of emulating a Turing machine, but faster. However, you will run into the same problem with this hypothetical new machine too, as long as it has finite power, which all real-world machines have.

If you improve your analysis of Busy Beavers, you can decide whether certain candidates are actually case 1 or case 2. We can think of it as a spectrum of behaviour with stopping at 0, running forever at 2, and undecidability at 1. To start with we know that 0 to 0.5 will stop (case 1) and 1.5 to 2 will run forever (case 2), with 0.5 to 1.5 being undecidable without running it (case 3). We improve our understanding of Busy Beavers. Now we know that 0 to 0.95 will stop and 1.05 to 2 will run forever, with 0.95 to 1.05 being undecidable.

At some point, there's no way to predict without running the machine whether it will halt or not. The only way to determine its behaviour is to run it, and if it stops, it usually gives you no insight you can use for other potential Busy Beavers. If it doesn't stop after $10^6$ cycles, you can try $10^7$, then $10^8$, and so on. At some point you give up without knowing whether it will stop if given enough time.
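The give-up strategy above can be sketched as a small control loop. This is a scaled-down toy of my own: the "machines" are counters that halt after a known number of steps, and the budgets are much smaller than $10^6$, because the control flow, not the machines, is the point:

```python
# Sketch of the "try a budget, then a bigger one, then give up" strategy.
def run_with_growing_budgets(make_machine, budgets):
    for budget in budgets:
        machine = make_machine()          # fresh run for each budget
        for steps in range(budget):
            if machine():                 # True means "halted"
                return ('halted', steps + 1)
    return ('gave up', None)

def make_countdown(n):
    """Toy 'machine' that halts after exactly n steps."""
    state = {'n': n}
    def step():
        state['n'] -= 1
        return state['n'] <= 0
    return step

print(run_with_growing_budgets(lambda: make_countdown(1500),
                               [1000, 10000]))   # ('halted', 1500)
print(run_with_growing_budgets(lambda: make_countdown(10**6),
                               [1000, 10000]))   # ('gave up', None)
```

Note that 'gave up' tells you nothing: the second machine *does* halt eventually, just not within any budget we tried, which is exactly the predicament described above.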

The problem is similar to drawing a Mandelbrot set. Certain points "escape" to infinity quickly, others settle into a repeating pattern quickly. The points on the boundary of the Mandelbrot set are difficult to predict either way without calculating them. There are methods to make the decision easier, but there will always be chaotic behaviour between the two easily detectable outcomes that can't be predicted.
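The escape-time computation behind that analogy is easy to sketch: iterate the standard map $z \mapsto z^2 + c$ under an iteration budget, which plays the same role as a step budget for a Turing machine:

```python
# Classify a point c as 'escapes' (|z| exceeds 2, so the orbit provably
# diverges) or 'undecided' once the iteration budget runs out -- just
# like a halting search that runs out of steps.
def classify(c, max_iter=1000):
    z = 0
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return ('escapes', i + 1)
    return ('undecided', None)

print(classify(1))    # ('escapes', 3): diverges almost immediately
print(classify(-1))   # ('undecided', None): falls into a 0, -1 cycle
```

For $c = -1$ we can prove by hand that the orbit cycles forever, but the iteration alone can only ever report "undecided," mirroring the gap between running a machine and analysing it.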

Hopefully I've answered "why". Then "when" will be determined by your understanding of the specific problem you're trying to solve and the power of the machine you're using. Over time we can eat into the chaos and make more of it decidable, but the space of machines grows much faster than our ability to analyse them.