What is wrong with this naive approach to Hilbert's 10th problem?
Background: I have been studying some decidability results in number theory. In doing so, I have always assumed there was no need for me to study pedantic definitions of decidability using Turing machines or similar, since the naive idea of an algorithm ('something which you could program a computer to do') seemed to be enough to understand most proofs. I now notice that this intuitive understanding might lead me to false conclusions. /Background
Consider the following problem: find an algorithm which, given as input a polynomial with coefficients in $\mathbb{Z}$ in an arbitrary number of variables, outputs YES if the polynomial has an integer root and NO otherwise (Hilbert's 10th problem).
By a theorem of Matiyasevich, this problem is unsolvable, in that such an algorithm does not exist. The following naive approach of mine would imply that such an algorithm does exist:
- Clearly if a polynomial $f \in \mathbb{Z}[X_1, \ldots, X_n]$ has a root $\bar{z} = (z_1, \ldots, z_n) \in \mathbb{Z}^n$, then there is a proof of this in the theory of arithmetic (axiomatized by the Peano axioms, for example). Indeed: just calculate $f(\bar{z})$ and observe that it equals zero. By Gödel's Completeness Theorem, it follows that if $f$ has no root in $\mathbb{Z}^n$, there is likewise a proof of this fact in the first-order theory of arithmetic.
- Given a finite sequence of symbols from the language of arithmetic (logical symbols, $+, -, \cdot, 0, 1$ and a symbol $\downarrow$ indicating that a new line should begin), there exists an algorithm which checks whether this sequence is a valid proof in the theory of arithmetic.
- The set of finite sequences of symbols in the language of arithmetic is countable. Even better: since the alphabet is finite, a bijection $\varphi$ from $\mathbb{N}$ to this set can be constructed explicitly by listing all sequences of length 1, then all sequences of length 2, and so on (see the enumeration sketch after this list).
- Now define the following algorithm: for $m = 1, 2, 3, \ldots$, check whether $\varphi(m)$ is a valid proof. If it is, and its last line spells "$f$ has a root in $\mathbb{Z}^n$" or "$f$ has no root in $\mathbb{Z}^n$", output YES or NO accordingly; otherwise continue with $m+1$ (see the search-loop sketch after this list).
- By my first point, the above algorithm always terminates.
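To make the enumeration concrete, here is a minimal Python sketch of the length-lexicographic listing. The symbol set `ALPHABET` is a placeholder standing in for the actual language of arithmetic:

```python
from itertools import product

# Placeholder alphabet standing in for the language of arithmetic
# (logical symbols, +, -, *, 0, 1, the new-line symbol, etc.).
ALPHABET = ["0", "1", "+", "-", "*", "=", "(", ")", "~", "&", "E", "|"]

def all_sequences():
    """Yield every finite sequence over ALPHABET, shortest first.

    This realizes the bijection phi: for each length 1, 2, 3, ...
    emit all sequences of that length in lexicographic order.
    """
    length = 1
    while True:
        for symbols in product(ALPHABET, repeat=length):
            yield "".join(symbols)
        length += 1
```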
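Building on that enumeration, the proposed search loop itself can be sketched as follows. The proof checker and the two target sentences are passed in as parameters because they are not written out here; checking a formal proof is genuinely mechanical, so `is_valid_proof` and `conclusion` are computable in principle, just tedious to implement:

```python
def decide_root_existence(is_valid_proof, conclusion, yes_sentence, no_sentence):
    """The proposed procedure, parametrized over a proof checker.

    is_valid_proof(s) -> bool and conclusion(s) -> str are assumed to
    be supplied by the caller.  yes_sentence and no_sentence are the
    formal statements "f has a root in Z^n" and "f has no root in Z^n".
    """
    for candidate in all_sequences():
        if is_valid_proof(candidate):
            if conclusion(candidate) == yes_sentence:
                return "YES"
            if conclusion(candidate) == no_sentence:
                return "NO"
    # Note: if neither sentence is provable, this loop never returns.
```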
Where did I make a mistake, i.e. which part of my argument violates the strict definition of what an algorithm is? Unless I am overlooking something, this seems like something I could make a computer do if my programming skills allowed it.
The problem is that it is possible that $f$ has no integer roots, yet there is no proof of this fact (in whatever theory of arithmetic you are using). You're right that if $f$ does have a root, then you can prove it by just plugging in that root. But if $f$ does not have a root, that fact need not be provable: Gödel's Completeness Theorem only guarantees a proof of statements that hold in every model of your axioms, and a polynomial with no root in the standard integers may still have a root in some nonstandard model of Peano arithmetic. In that case, your algorithm will never halt.
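The direction that does work can be made concrete. Below is a minimal Python sketch of the YES-side semi-decision procedure, assuming polynomials are represented as dicts mapping exponent tuples to integer coefficients (a representation chosen only for this illustration):

```python
from itertools import count, product
from math import prod

def evaluate(poly, point):
    """Evaluate a polynomial, given as a dict mapping exponent tuples
    to integer coefficients, at an integer point."""
    return sum(coeff * prod(x**e for x, e in zip(point, exps))
               for exps, coeff in poly.items())

def search_for_root(poly, n):
    """Semi-decide the YES side: enumerate Z^n box by box and halt as
    soon as a root is found.  If poly has no integer root, this loop
    runs forever -- and there is no corresponding search for NO."""
    for bound in count(0):
        for point in product(range(-bound, bound + 1), repeat=n):
            # Skip interior points already tried at smaller bounds.
            if bound > 0 and max(abs(x) for x in point) < bound:
                continue
            if evaluate(poly, point) == 0:
                return point  # the root itself is the proof of YES
```

For example, with $f = X^2 - 9$ encoded as `{(2,): 1, (0,): -9}`, `search_for_root(f, 1)` returns `(-3,)`; for $X^2 + 1$ the search would run forever.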