Fastest Square Root Algorithm
(edit, 9 years later... hello smart contract developers, I know that's why you're here lol)
What is the fastest algorithm for finding the square root of a number?
I created one that can find the square root of $987654321$ to $16$ decimal places in just $20$ iterations.
I've now tried Newton's method as well as my own method (Newton's code is shown below).
What is the fastest known algorithm for taking the second root of a number?
My code for Newton's method (*Edit: there was an error in my code; it is fixed below):
a = 2         // 2nd root
b = 987654321 // base (the radicand)
n = 1         // initial guess
c = 0         // current iteration counter
r = 500000    // total number of iterations to run
while (c < r)
{
    m = n - (Math.pow(n, a) - b)/(a*Math.pow(n, a - 1)) // Newton's step: x - f(x)/f'(x)
    n = m
    c++;
    trace(m + " <--guess ... iteration--> " + c)
}
Solution 1:
If you use Halley's method, you get cubic convergence! It is the second member of the class of Householder's methods.
Halley's method is:
$$
x_{n+1} = x_n - \frac{2f(x_n)f'(x_n)}{2[f'(x_n)]^2-f(x_n)f''(x_n)}
$$
If we let $f(x) = x^2 - a$, which meets the criterion (a continuous second derivative), then Halley's method becomes:
$$
x_{n+1} = x_n - \frac{\left(2x_n^3 - 2ax_n\right)}{3x_n^2 + a}
$$
Which has the simplification:
$$
x_{n+1} = \frac{x_n^3 + 3ax_n}{3x_n^2 + a}
$$
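As a concrete illustration, here is a minimal C sketch of that simplified iteration (the function name, the crude starting guess, and the iteration count are my own choices, not part of the answer):

#include <stdio.h>

/* Halley iteration for sqrt(a): x <- (x^3 + 3*a*x) / (3*x^2 + a) */
double halley_sqrt(double a, double x, int iters) {
    for (int k = 0; k < iters; ++k) {
        double x2 = x * x;
        x = (x * x2 + 3.0 * a * x) / (3.0 * x2 + a);
    }
    return x;
}

int main(void) {
    /* cubic convergence: reaches full double precision even from a poor guess */
    printf("%.12f\n", halley_sqrt(987654321.0, 1.0, 30));
    return 0;
}

Each step costs one division, like Newton's, plus a few extra multiplications, in exchange for cubic rather than quadratic convergence.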
I will also point to a paper that discusses extensions of Newton's method.
There exists an extension due to Potra and Pták called the “two-step method” that may be re-written as the iterative scheme
$$x_{n+1} = x_n - \frac{f(x_n)+f\!\left(x_n - \frac{f(x_n)}{f'(x_n)}\right)}{f'(x_n)}$$
that converges cubically in some neighborhood of the root $x^{*}$ and does not require computing the second derivative.
See: On Newton-type methods with cubic convergence for more information on this topic.
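For $f(x) = x^2 - N$ the two-step scheme is cheap, since the derivative $f'(x) = 2x$ is reused for both sub-steps. A hedged C sketch (the function name is mine):

/* Potra-Ptak two-step method for f(x) = x^2 - N, with f'(x) = 2x:
   x <- x - [f(x) + f(x - f(x)/f'(x))] / f'(x)   (cubic convergence near the root) */
double potra_ptak_sqrt(double N, double x, int iters) {
    for (int k = 0; k < iters; ++k) {
        double fx  = x * x - N;
        double dfx = 2.0 * x;
        double y   = x - fx / dfx;   /* ordinary Newton step */
        double fy  = y * y - N;
        x = x - (fx + fy) / dfx;     /* corrected update using both residuals */
    }
    return x;
}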
As Hurkyl and others have noted, your best bet is just to use Newton's method. These alternative methods generally require more operations per iteration, so they aren't really worth the computational cost, but they make a good comparison.
Solution 2:
Newton's method for solving $f(x)=x^2-N=0$ leads to the recurrence
$x_{n+1}=x_n-\frac{x_n^2-N}{2x_n}=\frac{x_n+N/x_n}2$, also known as Heron's method. Since $f'(x)\ne0$ at the root, the convergence is quadratic (i.e. the number of correct decimals doubles with each step once a threshold precision is reached).
The results depend on the starting value, of course. Simply guessing $x_0=1$ leads to
$$x_{1} = 493827161.00000000000\\
x_{2} = 246913581.49999999899\\
x_{3} = 123456792.74999998937\\
x_{4} = 61728400.374999909634\\
x_{5} = 30864208.187499266317\\
x_{6} = 15432120.093744108961\\
x_{7} = 7716092.0468278285538\\
x_{8} = 3858110.0230600438248\\
x_{9} = 1929183.0086989850523\\
x_{10} = 964847.48170274167713\\
x_{11} = 482935.55973452582660\\
x_{12} = 242490.33277426247529 \\
x_{13} = 123281.64823302696814 \\
x_{14} = 65646.506775513694016 \\
x_{15} = 40345.773393104621684 \\
x_{16} = 32412.760144718719221 \\
x_{17} = 31441.958847358050036 \\
x_{18} = 31426.971626562861740 \\
x_{19} = 31426.968052932067262 \\
x_{20} = 31426.968052931864079 $$
with small enough error.
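For what it's worth, these iterates can be reproduced with a few lines of C (my own transcription; note that double precision carries only about 16 significant digits, so matching the last few digits listed above would require extended precision):

#include <stdio.h>

int main(void) {
    double N = 987654321.0;
    double x = 1.0;                 /* starting guess x_0 = 1 */
    for (int n = 1; n <= 20; ++n) {
        x = 0.5 * (x + N / x);      /* Heron's step: x <- (x + N/x)/2 */
        printf("x_%d = %.11f\n", n, x);
    }
    return 0;
}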
Solution 3:
Not a bona fide "algorithm", but a cute hack that I once used in code that required taking an inverse square root millions of times (back when I was doing computational astrophysics) is found here:
http://en.wikipedia.org/wiki/Fast_inverse_square_root
It does use a few iterations of Newton's method, but only after some very, very clever trickery.
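For reference, the trick looks roughly like this in C (a sketch following the Wikipedia article above; memcpy replaces the original code's pointer cast, which is undefined behavior in modern C):

#include <stdint.h>
#include <string.h>

/* Fast inverse square root, as popularized by Quake III Arena */
float fast_rsqrt(float x) {
    float y = x;
    uint32_t i;
    memcpy(&i, &y, sizeof i);               /* reinterpret the float's bits */
    i = 0x5f3759df - (i >> 1);              /* the "magic number" step */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - 0.5f * x * y * y);      /* one Newton iteration to polish */
    return y;
}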
I remember naively using trial-and-error optimization to find a "magic number" that would come closest to a direct square root; of course it was much slower (though still faster than just calling sqrt from math.h) and had a higher error than the hack above.
Solution 4:
I just noticed nobody's pointed out the following trick: to compute $1/\sqrt{n}$, Newton's method used to find a root of $f(y) = 1/y^2 - n$ gives the iteration
$$ y \leftarrow \frac{3y - ny^3}{2}$$
I believe that in some ranges, it is faster to compute an estimate of $\sqrt{n}$ by using Newton's method to first compute $1/\sqrt{n}$ then invert the answer than it is to use Newton's method directly.
It is likely faster to compute this as
$$ \frac{3y - ny^3}{2} = y - \frac{\left(n y^2 - 1\right)y}{2}$$
The point being that if $y$ is a good approximation of $1/\sqrt{n}$, then $n y^2 - 1$ is a good approximation of $0$, which reduces the amount of precision you need to keep around, and you can play tricks to speed up the calculation of a product if you already know many of its digits.
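A sketch of that residual form in C (the function name and the closing multiplication are my own framing; note that $\sqrt{n} = n \cdot (1/\sqrt{n})$, so even the final inversion needs no division):

/* Newton for f(y) = 1/y^2 - n, written in residual form.
   y0 must already be a reasonable approximation of 1/sqrt(n). */
double sqrt_via_rsqrt(double n, double y0, int iters) {
    double y = y0;
    for (int k = 0; k < iters; ++k) {
        double r = n * y * y - 1.0;   /* residual, near 0 once y is accurate */
        y -= 0.5 * r * y;             /* y <- (3y - n*y^3)/2 */
    }
    return n * y;                     /* sqrt(n) = n * (1/sqrt(n)) */
}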
But it's possible you might be able to do even better by computing an approximation of both $x \sim \sqrt{n}$ and $y \sim 1/\sqrt{n}$ simultaneously. I haven't worked through the details.
The reason to hope this might work out is that a better update for $x$ may be available because $ny \sim n/x$, and a faster update for $y$ because $y \sim 1/x$ rather than $y \sim 1/\sqrt{n}$.
Solution 5:
In case you meant not the theoretical speed but the algorithm that runs the fastest on a computer: it's the "quake 3" algorithm or one of its derivatives, which, I believe, is what GCC's sqrt compiles to at optimization levels 2 and 3. It's ironic that the determining factor here is a clever choice and implementation of the initial value rather than the actual iterative scheme. On my laptop a call to sqrt takes about 2.0e-9 s with gcc -O2 or gcc -O3, which is about 2.8 times faster than the runner-up, the standard implementation of sqrt.
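If you want to reproduce that kind of measurement, a rough harness might look like this (my own sketch, not from the answer; microbenchmarks like this are sensitive to vectorization and loop optimizations, so treat the result as an estimate):

#include <math.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    const int N = 10000000;
    volatile double sink = 0.0;      /* volatile keeps the loop from being optimized away */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; ++i)
        sink += sqrt((double)(i + 1));
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
    printf("%.2e s per sqrt call\n", secs / N);
    return 0;
}

Compile with gcc -O2 bench.c -lm and compare against -O0 to see what difference the optimizer makes.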