Why does the Euclidean algorithm always terminate?

Why does the Euclidean algorithm always terminate? Can we make this effective by bounding the number of steps it takes in terms of the original integers?


Solution 1:

It always terminates because at each step one of the two arguments to $\gcd({}\mathbin\cdot{},{}\mathbin\cdot{})$ gets smaller, and at the next step the other one gets smaller. You can't keep getting smaller positive integers forever; that is the "well ordering" of the natural numbers. As long as neither of the two arguments is $0$ you can take it one more step, but it can't go on forever, so you have to reach a point where one of them is $0$, and then it stops.
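
For concreteness, here is a minimal Python sketch of the algorithm (the function name `euclid_trace` is just for illustration) that prints each pair as it runs; the well-ordering argument above is exactly what guarantees the loop exits.

```python
def euclid_trace(a, b):
    """Print each pair visited by the Euclidean algorithm and return gcd(a, b)."""
    while b != 0:
        print(f"gcd({a}, {b})")
        a, b = b, a % b   # the new pair is (b, a mod b), and a mod b < b
    print(f"gcd({a}, 0) = {a}")
    return a

euclid_trace(1071, 462)   # the pairs shrink until the second argument is 0; gcd is 21
```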

As for bounds, a very crude and easily established upper bound on the number of steps is the sum of the two arguments. One of the arguments is reduced by at least $1$ at each step, and you can't reduce $n$ repeatedly by $1$ more than $n$ times without bringing it to $0$.
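
A quick empirical sanity check of that crude bound, as a sketch (the helper name `euclid_steps` is mine, not anything standard):

```python
def euclid_steps(a, b):
    """Count the division steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

for a, b in [(1071, 462), (10**6, 999983), (89, 55)]:
    n = euclid_steps(a, b)
    assert n <= a + b   # the crude bound: at most a + b steps
    print(f"gcd({a}, {b}) took {n} steps; crude bound {a + b}")
```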

The worst case is $\gcd(m,n)$ where the ratio of $m$ to $n$ is the ratio of two consecutive Fibonacci numbers. For now I'll leave the proof of that as an exercise.
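
If you want evidence for that claim before proving it, a brute-force search over small inputs (illustrative code only) points straight at consecutive Fibonacci numbers:

```python
def euclid_steps(a, b):
    """Count the division steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# exhaustive search over all pairs with 1 <= b < a <= 1000
worst = max((euclid_steps(a, b), a, b) for a in range(2, 1001) for b in range(1, a))
print(worst)   # (14, 987, 610): 987 and 610 are consecutive Fibonacci numbers
```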

Solution 2:

This is a case of "infinite descent".

An iteration of the algorithm transforms a pair $(a,b)$ with $a>b\ge0$ into another pair $(a',b')$ with $a'>b'\ge0$, and also $a'<a$ (not merely $a'\le a$). So each iteration turns the problem into another problem of the same type with strictly smaller arguments, and you must reach $0$ after a finite number of steps.
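
Here is a small sketch (illustrative names only) that asserts exactly this invariant at every iteration:

```python
def gcd_with_invariant(a, b):
    """Euclidean algorithm that checks the descent invariant at each step."""
    assert a > b >= 0
    while b != 0:
        a_next, b_next = b, a % b
        assert a > a_next > b_next >= 0   # strictly smaller, same shape of problem
        a, b = a_next, b_next
    return a

print(gcd_with_invariant(1071, 462))   # 21
```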

For a discussion of the number of steps, see https://en.wikipedia.org/wiki/Euclidean_algorithm#Number_of_steps.

Solution 3:

Yes, there is a bound, and it is used in computational mathematics. If $a$ and $b$ are integers with $a\ge b\ge1$ and the Euclidean algorithm requires $n+1$ division steps, then $n+1 \le 5\,(\lfloor\log_{10}b\rfloor+1)$; in other words, the number of steps is at most five times the number of decimal digits of $b$ (this is Lamé's theorem). Moreover, the algorithm must terminate in a finite number of steps, because each remainder is strictly smaller than its predecessor. If it did not terminate, the remainders would form an infinite strictly decreasing sequence of natural numbers, so the set of remainders would have no minimal element, which is absurd (we are in $\mathbb N$).
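
A spot check of that digit bound, again only a sketch with made-up helper names:

```python
import random

def euclid_steps(a, b):
    """Count the division steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

random.seed(0)
for _ in range(10_000):
    b = random.randint(1, 10**9)
    a = random.randint(b, 10**12)
    digits = len(str(b))                        # number of decimal digits of b
    assert euclid_steps(a, b) <= 5 * digits     # Lamé's bound
print("bound held on all samples")
```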

Solution 4:

If you take two steps of the Euclidean algorithm, you more than halve the larger number.

$$\begin{align} a,b &\to b, c \;(\equiv a \bmod b)\\[3ex] b\le a/2 &\implies c<b \le a/2 \\ b>a/2 &\implies c=a-b < a/2\\[3ex] b,c &\to c,d\;(<c) \quad\square \end{align}$$

So the process terminates in at most $2\log_2 a$ steps.
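
A quick empirical check of the $2\log_2 a$ bound, again only a sketch:

```python
import math
import random

def euclid_steps(a, b):
    """Count the division steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

random.seed(1)
for _ in range(10_000):
    a = random.randint(2, 10**12)
    b = random.randint(1, a - 1)
    assert euclid_steps(a, b) <= 2 * math.log2(a)
print("bound held on all samples")
```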