Minimum number of iterations in Newton's method to find a square root

I am writing an algorithm that evaluates the square root of a positive real number $y$. To do this I am using the Newton-Raphson method to approximate the roots of $f(x)=x^2-y$. The $n^{th}$ iteration gives $$x_n=\frac{x_{n-1}^2+y}{2x_{n-1}}$$ as an approximation to $\sqrt{y}$. I found that starting with an initial guess $x_0=1$ generally works pretty well, so an answer to the question below that assumes $x_0=1$ is fine.

My question: is there an exact expression for the minimum $N$ of iterations needed to attain a given precision $p$ in the approximate solution $x_N$? In other words I'm looking for the smallest integer $N$ such that $$\left|\frac{x_N-\sqrt y}{\sqrt y}\right|<p.$$
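For concreteness, here is a minimal Python sketch of the iteration with this stopping rule (the function name is mine; `math.sqrt` is used only to measure the error against the true root):

```python
import math

def newton_sqrt(y, p, x0=1.0):
    """Iterate x <- (x*x + y) / (2*x) until |x / sqrt(y) - 1| < p.
    Returns the estimate and the number of iterations used.
    For p near machine epsilon the loop may stall one ulp away."""
    s = math.sqrt(y)  # reference value, used only in the stopping test
    x, n = x0, 0
    while abs(x - s) / s >= p:
        x = (x * x + y) / (2.0 * x)
        n += 1
    return x, n

print(newton_sqrt(2.0, 1e-12))  # e.g. (1.4142135623730951, 5)
```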

I've thought about this for a while and played around with the expression for the errors $\epsilon_n = x_n - \sqrt y$, which can be shown to satisfy $\epsilon_{n+1}=\epsilon_n^2/(2x_n)$, but I can't get to an exact expression for $N$. I've also looked around on Google without finding an answer.
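For reference, that error recurrence is one line of algebra from the iteration:

$$\epsilon_{n+1}=x_{n+1}-\sqrt y=\frac{x_n^2+y}{2x_n}-\sqrt y=\frac{\left(x_n-\sqrt y\right)^2}{2x_n}=\frac{\epsilon_n^2}{2x_n}.$$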

Any pointers to a solution online or help would be much appreciated. A follow-up would of course be: can $x_0$ be optimised (while being a simple enough expression in terms of $y$) in order to minimise $N$?


Solution 1:

There is such a formula: consider

$$\frac{x_n+\sqrt y}{x_n-\sqrt y}=\frac{\frac{x_{n-1}^2+y}{2x_{n-1}}+\sqrt y}{\frac{x_{n-1}^2+y}{2x_{n-1}}-\sqrt y}=\frac{(x_{n-1}+\sqrt y)^2}{(x_{n-1}-\sqrt y)^2}=\left(\frac{x_{n-1}+\sqrt y}{x_{n-1}-\sqrt y}\right)^2.$$

By induction,

$$\frac{x_n+\sqrt y}{x_n-\sqrt y}=\left(\frac{x_{0}+\sqrt y}{x_{0}-\sqrt y}\right)^{2^n}.$$
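Solving this relation for $x_n$ also gives the iterates in closed form, with $r=\frac{x_0+\sqrt y}{x_0-\sqrt y}$:

$$x_n=\sqrt y\,\frac{r^{2^n}+1}{r^{2^n}-1}.$$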

If you want to achieve $2^{-b}$ relative accuracy, i.e. $x_n=(1+2^{-b})\sqrt y$, this gives

$$2^n=\frac{\log_2\frac{(1+2^{-b})\sqrt y+\sqrt y}{(1+2^{-b})\sqrt y-\sqrt y}}{\log_2\left|\frac{x_{0}+\sqrt y}{x_{0}-\sqrt y}\right|},$$

$$n=\log_2\left(\log_2\frac{2+2^{-b}}{2^{-b}}\right)-\log_2\left(\log_2\left|\frac{x_{0}+\sqrt y}{x_{0}-\sqrt y}\right|\right).$$

The first term relates to the desired accuracy. The second is a penalty you pay for providing an inaccurate initial estimate.
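As a quick numeric sanity check of this formula (a sketch only; the function names are mine, and since $n$ must be an integer I round the prediction up, so rounding at the boundary could shift it by one):

```python
import math

def actual_iters(y, x0, p):
    """Run the iteration until |x_n - sqrt(y)| / sqrt(y) < p."""
    s = math.sqrt(y)
    x, n = x0, 0
    while abs(x - s) / s >= p:
        x = (x * x + y) / (2.0 * x)
        n += 1
    return n

def predicted_iters(y, x0, p):
    """Smallest n with 2^n * log2|r0| >= log2((2 + p) / p),
    where r0 = (x0 + sqrt(y)) / (x0 - sqrt(y))."""
    s = math.sqrt(y)
    num = math.log2((2.0 + p) / p)
    den = math.log2(abs((x0 + s) / (x0 - s)))
    return math.ceil(math.log2(num / den))

for y in (2.0, 10.0, 1e6):
    for p in (1e-6, 1e-12):
        print(y, p, actual_iters(y, 1.0, p), predicted_iters(y, 1.0, p))
```

Both columns agree on these inputs (e.g. 13 iterations for $y=10^6$, $p=10^{-6}$).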

If the floating-point representation of $y$ is available, a very good starting approximation is obtained by setting the mantissa to $1$ and halving the exponent (with rounding). This yields an estimate that is at worst a factor of $\sqrt 2$ away from the true square root.

$$n=\log_2\left(\log_2\left(2^{b+1}+1\right)\right)-\log_2\left(\log_2\frac{\sqrt 2+1}{\sqrt 2-1}\right)\approx\log_2(b+1)-1.35.$$ In the case of single precision (23-bit mantissa), 4 iterations are always enough; for double precision (52 bits), 5 iterations.
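As an illustration, the seed and a fixed iteration count can be sketched in Python with `math.frexp`/`math.ldexp` (these use the convention $y=m\cdot 2^e$ with $0.5\le m<1$; flooring $e/2$ still gives a seed within a factor $\sqrt 2$ of $\sqrt y$). The function names are mine:

```python
import math

def sqrt_seed(y):
    """Drop the mantissa of y and halve its exponent (rounding down):
    the result is within a factor sqrt(2) of sqrt(y)."""
    _, e = math.frexp(y)            # y = m * 2**e, 0.5 <= m < 1
    return math.ldexp(1.0, e // 2)  # 1.0 * 2**(e // 2)

def newton_sqrt_fixed(y):
    x = sqrt_seed(y)
    for _ in range(5):  # 5 iterations suffice for double precision
        x = (x * x + y) / (2.0 * x)
    return x

print(newton_sqrt_fixed(2.0), math.sqrt(2.0))
print(newton_sqrt_fixed(1e30), math.sqrt(1e30))
```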

At the opposite extreme, if $x_0=1$ is used as the starting value and $y$ is much larger than $1$, then $\log_2\left|\frac{1+\sqrt y}{1-\sqrt y}\right|$ is close to $\frac{2}{\ln(2)\sqrt y}$ and the formula degenerates to $$n\approx\log_2(b+1)+\log_2\sqrt y-1.53.$$ The benefit of quadratic convergence is lost, as the second term grows linearly with the exponent of $y$.
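For a concrete check of this degeneration (numbers mine): with $y=10^6$ and $b=40$ (i.e. $p=2^{-40}\approx10^{-12}$), the estimate gives $n\approx\log_2 41+\log_2 10^3-1.53\approx 5.4+10.0-1.5\approx 13.8$, so about $14$ iterations from $x_0=1$, versus $5$ from the floating-point seed above.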