In practice, what does it mean for Newton's method to converge quadratically (when it converges)?

A great example is to do what your calculator does when it computes $\sqrt{x}$ for some number. That way, you can see quadratic convergence in practice.

Say you want to figure out $\sqrt{17}$. This is equivalent to finding the zeros of the polynomial $f(x)=x^{2}-17$, starting from some initial guess.

Let's try it. Given an initial guess of, say, $4.2$ ($4^{2}=16$ is too small, and $5^2=25$ is too big), we use Newton's method to find the root.

$x_1=4.2-\frac{(4.2)^2-17}{2(4.2)}=4.12380952380952381$, which differs from the calculator's value $\sqrt{17}=4.12310562561766054982$ by $|4.12380952380952381-4.12310562561766054982|\approx 7\times10^{-4}$.

Let's run it again and see what happens to our error: $x_2=4.12380952380952381-\frac{(4.12380952380952381)^2-17}{2(4.12380952380952381)}=4.1231056856922907731$, with a new error of $|4.1231056856922907731-4.12310562561766054982|\approx 6\times10^{-8}$.

So one iteration took us from an error on the order of $10^{-4}$ to one on the order of $(10^{-4})^2=10^{-8}$: the error shrinks quadratically. This is extremely fast, and it is why your calculator still uses this method.
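If you want to watch this happen yourself, here is a minimal Python sketch of the iteration above (the function name and step count are my own illustrative choices):

```python
import math

def newton_sqrt17(x, steps=4):
    """Newton's method for f(x) = x**2 - 17, i.e. approximating sqrt(17)."""
    target = math.sqrt(17)  # reference value, standing in for the calculator's output
    for n in range(1, steps + 1):
        x = x - (x**2 - 17) / (2 * x)  # Newton update: x <- x - f(x)/f'(x)
        print(f"x_{n} = {x:.17f}, error = {abs(x - target):.3e}")
    return x

newton_sqrt17(4.2)
```

Starting from $4.2$, the printed errors go roughly $7\times10^{-4}$, then $6\times10^{-8}$, and then hit the limits of double precision: the exponent of the error doubles each step.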


Let $\epsilon_n=|x_n-L|$ be the error at the $n$th step. Supposing the claimed limit does indeed hold, this means

$$\frac{\epsilon_{n+1}}{\epsilon_n^2}\sim\mu\implies\epsilon_{n+1}\simeq\mu\epsilon_n^2$$

i.e. the error at the next step is roughly a quadratic function of the error at the current step (hence the name). Similarly, for linear convergence,

$$\epsilon_{n+1}\simeq\mu\epsilon_n$$

i.e. the next error is a linear function of the current error.
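To see how dramatic the difference is, take an illustrative value $\mu=0.1$ (a hypothetical constant, chosen only for easy arithmetic) and a starting error $\epsilon_0=10^{-2}$. Then

$$\text{quadratic: }10^{-2},\ 10^{-5},\ 10^{-11},\ 10^{-23},\ \dots\qquad\text{linear: }10^{-2},\ 10^{-3},\ 10^{-4},\ 10^{-5},\ \dots$$

Quadratic convergence piles up accurate digits explosively, while linear convergence adds them one at a time.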

Visually, however, one cannot easily "see" the value of $\epsilon_n$ itself, but rather $\log(\epsilon_n)$, which counts the number of accurate digits. Taking the logarithm of both sides, one may see that quadratic convergence implies

$$\log(\epsilon_{n+1})\simeq2\log(\epsilon_n)+\log(\mu)\simeq2\log(\epsilon_n)$$

meaning that the number of accurate digits will roughly double per iteration. You may obtain the order of convergence by observing the limit

$$\frac{\log(\epsilon_{n+1})}{\log(\epsilon_n)}\to2\quad(\text{or 1 for linear convergence})$$

with the additional caveat that the second term, $\log(\mu)$, must stay bounded (and be less than $0$ for linear convergence); though usually if it isn't, one doesn't worry too much about it.

For example, let us estimate $\sqrt2$ using Newton's method starting from $x_0=1$.

$$\begin{array}{c|c}
n & x_n\\\hline
0 & 1.000000000\\
1 & 1.500000000\\
2 & 1.\underline{41}6666666\\
3 & 1.\underline{41421}5686\\
4 & 1.\underline{414213562}\\
5 & 1.414213562
\end{array}$$

From the underlined digits it is easy to see that the number of accurate digits roughly doubles per iteration.
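As a quick check of that limit, here is a small Python sketch (the variable names are mine) that regenerates the iterates above and prints the observed ratios $\log(\epsilon_{n+1})/\log(\epsilon_n)$:

```python
import math

# Newton's method for f(x) = x**2 - 2, starting from x_0 = 1
x, L = 1.0, math.sqrt(2)
errors = [abs(x - L)]
for _ in range(4):
    x = x - (x**2 - 2) / (2 * x)  # Newton update
    errors.append(abs(x - L))

# Observed order of convergence: log(eps_{n+1}) / log(eps_n)
for e_prev, e_next in zip(errors, errors[1:]):
    if e_next > 0:  # guard against hitting machine precision exactly
        print(math.log(e_next) / math.log(e_prev))
```

The printed ratios come out near $2.8$, $2.4$, $2.2$, $2.1$, drifting down toward the limiting order $2$.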


If your question concerns why Newton's method (usually) converges quadratically, the proof follows from a quick Taylor expansion about the current iterate $x=x_n$, where $\xi$ is some point between $x$ and $L$:

$$f(L)=f(x)+f'(x)(L-x)+\underbrace{\frac12f''(\xi)(L-x)^2}_{\mathcal O(\epsilon_n^2)}$$

Setting $f(L)=0$ and solving for the $L$ in the second (linear) term leads to

$$L=\underbrace{x-\frac{f(x)}{f'(x)}}_{x_\mathrm{newton}}-\underbrace{\frac12\frac{f''(\xi)}{f'(x)}(L-x)^2}_{\mathcal O(\epsilon_n^2)}$$

which shows that

$$\epsilon_{n+1}=|L-x_\mathrm{newton}|=\left|\frac12\frac{f''(\xi)}{f'(x)}(L-x)^2\right|\simeq\mu\epsilon_n^2$$
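For completeness (the derivation leaves this implicit): as $x_n\to L$ we also have $\xi\to L$, so the constant is $\mu=\left|\frac{f''(L)}{2f'(L)}\right|$. For the $\sqrt{17}$ example earlier on this page, $f(x)=x^2-17$ gives

$$\mu=\frac{2}{2\cdot2\sqrt{17}}=\frac{1}{2\sqrt{17}}\approx0.121,$$

and indeed $\mu\epsilon_1^2\approx0.121\times(7\times10^{-4})^2\approx6\times10^{-8}$, matching the error observed there after the second iteration.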