Efficient way to determine if a number is a perfect square

Is there an efficient method to determine whether a very large number (300-600 digits) is a perfect square, without finding its square root? I tried finding the square root, but even for perfect squares it wasn't efficient (I used Newton's approximation method, coded in Java). I read somewhere about using modular arithmetic, but I'm not quite sure how to do that. Can anyone give me a hint on how to proceed?

Thanks in advance.


Faster than binary search is to use an integer version of Newton's method: for $\sqrt{A}$ and an initial guess $x_0$ of the right order of magnitude, let $x_{n+1} = \left \lfloor \frac{x_n^2 + A}{2 x_n} \right \rfloor$. I tried a 1200-digit number for $A$, with $x_0$ a 600-digit number, and $x_{10}$ was already correct. In Maple 15 on a not-very-fast computer, the CPU time was given as 0.
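To make this concrete, here is a minimal sketch of that iteration in Java with BigInteger (the OP's language), starting from the guess $2^{\lceil b/2 \rceil}$ for a $b$-bit input; the class and method names are my own:

import java.math.BigInteger;

public class NewtonSqrt {
    /* floor(sqrt(a)) by the integer Newton iteration above. */
    static BigInteger isqrt(BigInteger a) {
        if (a.signum() == 0) return BigInteger.ZERO;
        // 2^ceil(b/2) >= sqrt(a): a guess of the right order of magnitude.
        BigInteger x = BigInteger.ONE.shiftLeft((a.bitLength() + 1) / 2);
        while (true) {
            // x_{n+1} = floor((x_n^2 + A) / (2 x_n))
            BigInteger next = x.multiply(x).add(a).divide(x.shiftLeft(1));
            if (next.compareTo(x) >= 0) return x;  // stopped decreasing: done
            x = next;
        }
    }

    /* A is a perfect square iff isqrt(A)^2 == A. */
    static boolean isPerfectSquare(BigInteger a) {
        BigInteger r = isqrt(a);
        return r.multiply(r).equals(a);
    }
}

Every iterate stays at least $\lfloor \sqrt{A} \rfloor$, and the sequence decreases strictly until it lands there, so the first non-decreasing step is the answer.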


I agree with Yuval Filmus that you should use binary search, but in case you're still curious about the approach using modular arithmetic: a number $a$ is a square $\bmod p$ for $p$ a prime not dividing $a$ if and only if the Jacobi symbol $\left( \frac{a}{p} \right)$ is equal to $1$. You can efficiently compute the Jacobi symbol using quadratic reciprocity, and if you get an answer of $1$ for $n$ primes, then $a$ is a square with probability about $1 - \frac{1}{2^n}$.
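If you want to experiment with this, here is a sketch of the standard binary Jacobi-symbol algorithm in Java (quadratic reciprocity plus the supplementary law for $2$); jacobi is my own helper, not a BigInteger method:

import java.math.BigInteger;

/* Jacobi symbol (a/n) for odd n > 0; returns -1, 0, or 1. */
static int jacobi(BigInteger a, BigInteger n) {
    a = a.mod(n);
    int result = 1;
    while (a.signum() != 0) {
        // (2/n) = -1 exactly when n = 3 or 5 (mod 8)
        while (!a.testBit(0)) {
            a = a.shiftRight(1);
            int nMod8 = n.intValue() & 7;
            if (nMod8 == 3 || nMod8 == 5) result = -result;
        }
        // Reciprocity: flip the sign iff a = n = 3 (mod 4)
        BigInteger t = a; a = n; n = t;
        if ((a.intValue() & 3) == 3 && (n.intValue() & 3) == 3) result = -result;
        a = a.mod(n);
    }
    return n.equals(BigInteger.ONE) ? result : 0;
}

An answer of $-1$ for any prime $p$ not dividing $a$ proves that $a$ is not a perfect square, so a handful of small primes rejects most non-squares almost immediately.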

(Alternately, $a$ is a square $\bmod p$ if and only if $a^{ \frac{p-1}{2} } \equiv 1 \bmod p$. I don't know how the efficiency of computing this compares to the efficiency of computing the Jacobi symbol.)
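For reference, that test (Euler's criterion) is a one-liner with BigInteger.modPow; a sketch, assuming $p$ is an odd prime not dividing $a$:

import java.math.BigInteger;

/* Euler's criterion: a is a square mod an odd prime p (not dividing a)
 * iff a^((p-1)/2) = 1 (mod p). */
static boolean isSquareModP(BigInteger a, BigInteger p) {
    BigInteger e = p.subtract(BigInteger.ONE).shiftRight(1);
    return a.modPow(e, p).equals(BigInteger.ONE);
}

For what it's worth, the Jacobi symbol is computed by a gcd-like loop of shifts and reductions, while Euler's criterion costs a full modular exponentiation, so the Jacobi symbol is usually the faster of the two in practice.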


The square root can be found using binary search. The resulting algorithm should run very fast on a 600-digit number; my guess is under a second.

You can implement the binary search without repeated squaring: at each step you're only shifting and adding, which is why it's so fast (a sketch follows). But even if you were squaring at each step, it would still be very quick and certainly feasible.
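That shift-and-add search is essentially the classic digit-by-digit square root, a binary search on the bits of the root; a Java sketch with my own naming:

import java.math.BigInteger;

/* Digit-by-digit integer square root: one bit of the root per step,
 * using only shifts, adds, and compares. */
static BigInteger isqrtBitwise(BigInteger a) {
    if (a.signum() == 0) return BigInteger.ZERO;
    BigInteger root = BigInteger.ZERO;
    BigInteger rem = a;
    // Largest power of 4 not exceeding a.
    BigInteger bit = BigInteger.ONE.shiftLeft((a.bitLength() - 1) & ~1);
    while (bit.signum() != 0) {
        BigInteger trial = root.add(bit);
        if (rem.compareTo(trial) >= 0) {
            rem = rem.subtract(trial);           // accept this bit
            root = root.shiftRight(1).add(bit);
        } else {
            root = root.shiftRight(1);           // reject this bit
        }
        bit = bit.shiftRight(2);
    }
    return root;  // floor(sqrt(a)); a is a perfect square iff rem == 0
}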

Any reasonable bignum package will contain a function computing the square root, so you don't even need to code the binary search yourself.
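For instance, GMP has mpz_perfect_square_p, and since Java 9 BigInteger has sqrtAndRemainder:

import java.math.BigInteger;

// Java 9+: one call returns { floor(sqrt(a)), remainder }, so
// a is a perfect square exactly when the remainder is zero.
static boolean isPerfectSquare(BigInteger a) {
    return a.sqrtAndRemainder()[1].signum() == 0;
}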


I'm late to the party, but 2-adic Newton is pretty fast. Here's a 64-bit version:

#include <assert.h>

/* If x is square, return +sqrt(x).
 * Otherwise, return -1.
 */
long long isqrt(unsigned long long x) {
    if (x == 0) return 0;  /* __builtin_ctzll is undefined at 0 */

    /* Precondition: make x odd if possible. */
    int sh = __builtin_ctzll(x);
    x >>= (sh&~1);

    /* Early escape: an odd perfect square must be 1 mod 8. */
    if (x&6) return -1;

    /* 2-adic Newton */
    int i;
    const int ITERATIONS = 5; // log log x - 1
    unsigned long long z = (3-x)>>1, y=x*z;
    for (i=0; i<ITERATIONS; i++) {
        unsigned long long w = (3 - z*y) >> 1;
        y *= w;
        z *= w;
    }
    assert(x==0 || (y*z == 1 && x*z == y));

    /* Get the positive square root.  Also the top bit
     * might be wrong. */
    if (y & (1ull<<62)) y = -y;
    y &= ~(1ull<<63);

    /* Is it the square root over the integers?  If y needs more
     * than 32 bits, y*y == x cannot hold exactly. */
    if (y >= 1ull<<32) return -1;

    /* Yup. */
    return y<<(sh/2);
}

The function listed above takes 31 Haswell cycles when inlined, so it's about half as fast as a double-precision sqrtsd call. It computes the square root of its argument rather than merely testing squareness, which is more than the OP asked for, but it takes advantage of the fact that you only care about integer square roots. As a result, it will be much faster on extended-precision integers than an approximate square-root algorithm.

This works for longer integer types, with a few changes; a sketch follows the list.

  • It's a fixed-precision algorithm. If you use GMP's mpz_t for this, it will allocate new limbs instead of wrapping around, which is not what you want. You will need to explicitly clamp to the lowest n limbs.
  • You need about $\log_2 \log_2 x - 1$ ITERATIONS: it's Newton's method, so each iteration doubles the precision. So if x is 2048 bits, you need 10 instead of 5.
  • You can compute the first 4 iterations with longs if you want.
  • You need to replace the 32 with half the number of bits of your integer type, and likewise the 62 and 63 with the bit count minus 2 and minus 1.
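To illustrate those changes, here is a sketch of the same algorithm at an arbitrary power-of-two width $k$, written in Java with BigInteger and an explicit mask to emulate the fixed-precision wraparound (a direct translation with my own names, plus an explicit final check in place of the assert):

import java.math.BigInteger;

/* 2-adic Newton square root at k-bit precision (k a power of two >= 8).
 * Returns floor(sqrt(x)) if x < 2^k is a perfect square, else null. */
static BigInteger isqrt2adic(BigInteger x, int k) {
    BigInteger mask = BigInteger.ONE.shiftLeft(k).subtract(BigInteger.ONE);
    if (x.signum() == 0) return BigInteger.ZERO;

    // Strip the even part of the power of two, as in the 64-bit version.
    int sh = x.getLowestSetBit();
    x = x.shiftRight(sh & ~1);
    if (x.testBit(1) || x.testBit(2)) return null;  // "x & 6": not 1 mod 8

    // 2-adic Newton; log2(k) - 1 iterations, each doubling the precision.
    BigInteger three = BigInteger.valueOf(3);
    BigInteger z = three.subtract(x).and(mask).shiftRight(1);
    BigInteger y = x.multiply(z).and(mask);
    int iterations = Integer.numberOfTrailingZeros(k) - 1;  // 5 at k = 64, 10 at k = 2048
    for (int i = 0; i < iterations; i++) {
        BigInteger w = three.subtract(z.multiply(y)).and(mask).shiftRight(1);
        y = y.multiply(w).and(mask);
        z = z.multiply(w).and(mask);
    }

    // Get the positive root; the top bit might be wrong.
    if (y.testBit(k - 2)) y = y.negate().and(mask);
    y = y.clearBit(k - 1);

    // A square root in Z only if it fits in k/2 bits and squares back exactly.
    if (y.bitLength() > k / 2 || !y.multiply(y).equals(x)) return null;
    return y.shiftLeft(sh / 2);
}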