Primality testing for 64-bit numbers
For very small numbers, say 32-bit unsigned integers, testing all divisors up to the square root is a perfectly decent approach. Some optimizations can avoid trying every divisor, but they yield only marginal improvements; the complexity remains $O(\sqrt n)$.
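For reference, a minimal trial-division sketch in Python (the function name is mine); it includes the usual optimization of skipping even divisors after $2$:

```python
# Trial division up to the square root: O(sqrt(n)) in the worst case.
def is_prime_trial(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    d = 3
    while d * d <= n:  # only divisors up to sqrt(n) matter
        if n % d == 0:
            return False
        d += 2  # skip even candidates
    return True
```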
On the other hand, much faster primality tests exist, but they are sophisticated and only pay off for much larger numbers.
Is there an intermediate solution, i.e. a relatively simple algorithm of practical use for, say, 64-bit unsigned integers, with a target running time under 1 ms?
I am not after micro-optimizations of the exhaustive-division method; I am after a better working principle of reasonable complexity (and a deterministic one).
Update:
Using a Python version of the Miller-Rabin test from Rosetta Code, the time for the prime $2^{64}-59=18446744073709551557$ is $0.7$ ms. (Though this single measurement is not conclusive, since nothing says this input is a worst case.)
http://rosettacode.org/wiki/Miller%E2%80%93Rabin_primality_test#Python:_Proved_correct_up_to_large_N
And I guess that this code can be improved for speed.
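For what it is worth, averaging over many calls gives a steadier figure than a single run. A sketch, where the miller_rabin module and its is_prime function are placeholders for wherever the Rosetta Code implementation lives:

```python
import timeit

from miller_rabin import is_prime  # hypothetical import of the Rosetta Code test

n = 2**64 - 59  # the prime timed above
per_call = timeit.timeit(lambda: is_prime(n), number=1000) / 1000
print(f"{per_call * 1e3:.3f} ms per call")
```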
Solution 1:
I think a deterministic version of the Miller-Rabin test should do the trick. It is usually used as a probabilistic test for whopping big numbers, but it can be repurposed as a deterministic test for a smaller, fixed range.
The core of the Miller-Rabin test is the notion of a "strong probable prime" to some base $a$. Specifically, let $n$ be an odd prime, and write $n - 1 = d \cdot 2^s$ with $d$ odd. Then, for any $a$ not divisible by $n$, either $a^d \equiv 1 \pmod{n}$ or $a^{d 2^r} \equiv -1 \pmod{n}$ for some $0 \leq r < s$. (The Wikipedia page gives the reasoning.)
If $n$ is any odd number and $1 < a < n - 1$, we can run the same test; if it passes, we call $n$ a strong probable prime to base $a$ (and a strong pseudoprime if it is in fact composite). The usual Miller-Rabin test runs this check for many different randomly chosen $a$, backed by a number-theoretic argument that if $n$ is composite, the probability that none of the chosen $a$ exposes it is vanishingly small.
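As a concrete illustration, here is a minimal sketch of that single-base check (the name is_sprp is mine; it assumes $n$ is odd and $n > 2$):

```python
# Strong-probable-prime check to base a, following n - 1 = d * 2^s with d odd.
def is_sprp(n: int, a: int) -> bool:
    d, s = n - 1, 0
    while d % 2 == 0:  # factor out powers of 2 from n - 1
        d //= 2
        s += 1
    x = pow(a, d, n)          # a^d mod n
    if x == 1 or x == n - 1:  # covers a^d ≡ 1 and the r = 0 case
        return True
    for _ in range(s - 1):    # square repeatedly, looking for -1 mod n
        x = x * x % n
        if x == n - 1:
            return True
    return False
```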
However, if Wikipedia is to be believed (I haven't followed up the reference), for $n < 2^{64}$ it suffices to test $a \in \{2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37\}$: there simply are no composite numbers below $2^{64}$ that are strong pseudoprimes to all of these bases.
The above test is very fast: altogether, the primality test requires at most $12$ modular exponentiations, i.e. on the order of $12 \times 64$ modular multiplications, which is well below 1 ms. For smaller $n$, an even shorter list of bases $a$ suffices.
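Putting it together, a sketch of the resulting deterministic test, reusing is_sprp from the snippet above (the names BASES_64 and is_prime_u64 are mine):

```python
# Deterministic for n < 2^64, per the Wikipedia base list quoted above.
BASES_64 = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime_u64(n: int) -> bool:
    if n < 2:
        return False
    for p in BASES_64:
        if n % p == 0:
            return n == p  # small primes, and quick rejection of easy composites
    return all(is_sprp(n, a) for a in BASES_64)
```

Since Python's three-argument pow does fast modular exponentiation in C, this should comfortably fit the 1 ms budget for 64-bit inputs.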
Solution 2:
OEIS A014233 says:
The primality of numbers $\lt 2^{64}$ can be determined by asserting strong pseudoprimality to all prime bases $\le 37$.
The reference is the recent paper Strong pseudoprimes to twelve prime bases by Sorenson and Webster.
For code, see Prime64 and also the primes program in FreeBSD, especially spsp.c.