Fastest way to find if a given number is prime

The answer depends on the size.

For numbers less than $2^{64}$ what I've found best is:

  • Simple mod tests for small primes (2, 3, 5, ...). I check primes up to 53; benchmark to find what works best for you. Plain mod operations work well here. Remember to take tiny inputs into account.
  • For inputs that fit in 32 bits: (1) use 32-bit types, including for the pretest above, (2) look at trial division with a mod-30 wheel for very small inputs, and (3) consider a hashed single-base M-R test (see Forišek and Jančina 2015 or my code linked below); a table of 256 shorts covers the entire range with one test. If you don't want the hashed single test, you can use two M-R tests (bases 31, 73) for inputs below 9,080,191 and three (bases 2, 7, 61) above that. These are deterministic tests with no counterexamples. A minimal sketch combining the pretest with the 2, 7, 61 bases follows this list.
  • For 64-bit inputs it depends on your implementation, but I've found it works best to use the above method for "small" (32-bit) inputs and a BPSW test for larger values. Also look at the best SPRP bases for a stepped solution that is deterministic in no more than 7 tests; there are also hashed solutions using no more than 3 tests. Again, these are all deterministic if implemented correctly.
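As a concrete illustration of the 32-bit case, here is a minimal C sketch (the function names are mine, not from any particular library) of the small-prime pretest followed by deterministic M-R with bases 2, 7 and 61. Since $n < 2^{32}$, every product fits in 64 bits, so no Montgomery or 128-bit tricks are needed:

```c
#include <stdint.h>

/* Strong pseudoprime (M-R) test to base a, for odd n; assumes a < n.
   With n < 2^32 all products fit in uint64_t. */
static int is_sprp32(uint32_t n, uint32_t a) {
    uint32_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }
    uint64_t x = 1, b = a, e = d;
    while (e) {                          /* x = a^d mod n */
        if (e & 1) x = (x * b) % n;
        b = (b * b) % n;
        e >>= 1;
    }
    if (x == 1 || x == n - 1) return 1;
    while (--s > 0) {                    /* square s-1 more times */
        x = (x * x) % n;
        if (x == n - 1) return 1;
    }
    return 0;
}

/* Deterministic for all 32-bit inputs: no composite below 4,759,123,141
   passes bases 2, 7 and 61 together. */
int is_prime32(uint32_t n) {
    static const uint32_t small[] = {2,3,5,7,11,13,17,19,23,29,
                                     31,37,41,43,47,53};
    if (n < 2) return 0;
    for (int i = 0; i < 16; i++) {       /* mod pretest */
        if (n == small[i]) return 1;
        if (n % small[i] == 0) return 0;
    }
    if (n < 59 * 59) return 1;           /* no factor <= 53 and n < 59^2 */
    return is_sprp32(n, 2) && is_sprp32(n, 7) && is_sprp32(n, 61);
}
```

Note how the early return for n < 59² lets the pretest double as complete trial division for tiny inputs.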

I suggest doing a verification using both small inputs and the Feitsma SPRP-2 pseudoprime database to make sure your code works. There have been some unfortunate incidents where primality tests in oft-used libraries had errors.
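For the small-input half of that verification, an exhaustive comparison against an obviously correct reference is cheap; here is a sketch using the is_prime32 example above (the Feitsma database handles the pseudoprime half):

```c
#include <stdint.h>
#include <stdio.h>

int is_prime32(uint32_t n);              /* the sketch above */

/* Slow but obviously correct reference. */
static int is_prime_naive(uint32_t n) {
    if (n < 2) return 0;
    for (uint32_t d = 2; d <= n / d; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(void) {
    for (uint32_t n = 0; n < 1000000; n++)   /* raise the bound as patience allows */
        if (is_prime32(n) != is_prime_naive(n)) {
            printf("mismatch at %u\n", n);
            return 1;
        }
    puts("small-input check passed");
    return 0;
}
```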

Ideally you will want optimized code for the tests, e.g. Izykowski's x86-64 M-R code or my examples (the latter use Izykowski's x86-64 Montgomery routines where available). My code also shows use of the pretest, trial division with a mod-30 wheel, the hashed single-base M-R for 32-bit inputs, and BPSW for 64-bit inputs.
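The mod-30 wheel mentioned above is simple enough to sketch. After removing multiples of 2, 3 and 5, only 8 of every 30 candidates can be prime, so trial division walks a cycle of 8 gaps (this is my own illustration, not code from the sources above):

```c
#include <stdint.h>

/* Trial division with a mod-30 wheel. Once 2, 3 and 5 are handled,
   every prime is congruent to 1, 7, 11, 13, 17, 19, 23 or 29 mod 30;
   starting at 7, the gaps between candidates cycle through the table. */
int is_prime_wheel30(uint64_t n) {
    if (n < 2) return 0;
    if (n < 4) return 1;                       /* 2 and 3 */
    if (n % 2 == 0 || n % 3 == 0 || n % 5 == 0) return n == 5;
    if (n < 49) return 1;                      /* smallest remaining divisor is 7 */
    static const uint8_t gap[8] = {4, 2, 4, 2, 4, 6, 2, 6};
    uint64_t d = 7;
    for (int i = 0; d <= n / d; i = (i + 1) & 7) {  /* d*d <= n, overflow-safe */
        if (n % d == 0) return 0;
        d += gap[i];
    }
    return 1;
}
```

As the bullet above says, this is only worth it for very small inputs; past a few thousand the M-R route wins.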

For numbers larger than $2^{64}$ you're presumably using something like GMP. The exact set of operations that gives the best result will depend heavily on your bigint library and your platform. What I do with GMP (everything before the final test is a heuristic pre-filter):

  • For tiny inputs, use simple trial division. Ideally you would route any 64-bit input to the full solution above.
  • Small primes: mpz_even_p followed by mpz_gcd_ui with one or two ui-sized primorials (a minimal sketch of this pipeline follows the list).
  • Larger primes: mpz_gcd with a calculated-once primorial. GMP's gcd is very well optimized for large inputs and is significantly faster than calling mpz_divisible_p on each small prime. Benchmark and adjust as desired. My solution is a compromise, since I don't want the computation or memory use when people may not call it with large inputs: I do a gcd with the primes below 1009 first, then, if the input has more than 700 bits, with the primes below 40000, or with the primes below 10000 if it has more than 300 bits. Those numbers just happened to work for my application, but whatever values you use, this is a fast pre-test.
  • See the Handbook of Applied Cryptography (Menezes et al.) section 4.45 or Park's ISPEC 2005 paper for discussion of the time to spend on trial division vs. the primality test. For larger inputs (e.g. 1600 or more bits) I do more divisibility tests: under 3000 bits just a loop of mpz_divisible_ui_p, above that a simple tree sieve. You could obviously skip this (Pari does, for instance), but it makes things a lot faster for inputs of 10k+ digits.
  • BPSW. This is a base-2 M-R test followed by a Lucas test. IMO you should be doing the strong Lucas test (it is not only stronger than the standard test but also faster) or the extra-strong test, which can be faster still. Pari does an almost-extra-strong test, which also works well.
  • At this point you can stop and call it probably prime (what most people do), or add one or more random-base M-R tests and still call it probably prime (Mathematica adds one with base 3; I add 1-5 depending on size; FIPS 186-4 adds a lot), or you could do a proof.
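A minimal sketch of the early steps of such a pipeline, using only documented GMP calls (mpz_even_p, mpz_gcd_ui, mpz_probab_prime_p). The primorial constant assumes a 64-bit unsigned long, mpz_probab_prime_p stands in for the BPSW step (recent GMP versions do a BPSW-like test internally), and the reps value of 25 is arbitrary:

```c
#include <gmp.h>

/* Heuristic pre-filters, then hand off to GMP's built-in test.
   16294579238595022365 = 3*5*7*...*53, the product of the odd primes
   up to 53, which just fits in 64 bits. */
int probably_prime(const mpz_t n) {
    if (mpz_cmp_ui(n, 2) < 0) return 0;
    if (mpz_cmp_ui(n, 53) <= 0)                 /* tiny: let GMP decide */
        return mpz_probab_prime_p(n, 25) > 0;
    if (mpz_even_p(n)) return 0;
    /* one gcd against a ui-sized primorial removes multiples of 3..53 */
    if (mpz_gcd_ui(NULL, n, 16294579238595022365UL) != 1) return 0;
    /* ...a gcd with a larger precomputed mpz_t primorial would go here... */
    return mpz_probab_prime_p(n, 25) > 0;
}
```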

For proofs, here are a few points to consider:

  • For numbers under $2^{64}$ you should have used BPSW or deterministic M-R. Done.
  • Anyone who suggests actually using AKS in practice has never actually run it on numbers larger than 10,000 and should be ignored.
  • You should always use a test like BPSW first, as there is no point wasting time on expensive proofs for composites. Most primality proof code will already do this.
  • There are significantly faster methods for numbers of particular forms, e.g. the Lucas-Lehmer test for Mersenne numbers (a minimal sketch follows this list).
  • Proofs can be surprisingly fast for "small" numbers. A proof for a 40 digit prime can be done in a millisecond or two. 100 digits takes only 30-50 milliseconds. We can do 200 digits in under a second.
  • The growth rate of modern methods is on the order of the fourth power of the input size, so while it starts out nice and fast, it is not fast at all for large numbers. Expect an hour or so for numbers in the 1200-digit range (Primo, given multiple cores, can be faster, but the point stands). 20k digits will take a multi-core machine 1-4 months to complete a proof.
  • BLS75 methods and ECPP give a certificate. These can be verified on another computer using completely different software, and completed much faster than the actual proof. This is a huge advantage over methods like APR-CL and AKS which say "trust me, it's prime. Unless my program had a bug or my hardware was overclocked and did something wrong." For very large numbers, you really want a certificate of some sort.
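For the special-form point above, the Lucas-Lehmer test is short enough to show in full. A straightforward GMP version (not optimized; serious implementations replace mpz_mod with the fast reduction mod $2^p-1$):

```c
#include <gmp.h>

/* Lucas-Lehmer: for an odd prime p, M_p = 2^p - 1 is prime iff
   s_{p-2} == 0, where s_0 = 4 and s_{i+1} = s_i^2 - 2 (mod M_p).
   The caller must ensure p itself is prime. */
int mersenne_is_prime(unsigned long p) {
    if (p == 2) return 1;                  /* M_2 = 3 */
    mpz_t m, s;
    mpz_init(m);
    mpz_init_set_ui(s, 4);
    mpz_ui_pow_ui(m, 2, p);
    mpz_sub_ui(m, m, 1);                   /* m = 2^p - 1 */
    for (unsigned long i = 0; i < p - 2; i++) {
        mpz_mul(s, s, s);
        mpz_sub_ui(s, s, 2);
        mpz_mod(s, s, m);
    }
    int result = (mpz_sgn(s) == 0);
    mpz_clear(m);
    mpz_clear(s);
    return result;
}
```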

Some available proof software:

  • Primo. The fastest public primality proof software for large inputs. Highly recommended. Not open source, but freely available and used for many years. It uses a GUI, so it is not well suited to automated processes. Much faster than anything else I know past 1500 digits or so.
  • WraithX's APR-CL. Easy to use in a GMP program, and pretty fast.
  • Pari/GP. APR-CL, not quite as fast as WraithX's, but if you have GP installed it's super easy to use.
  • ECPP-DJ. My standalone ECPP proof software. Includes BPSW, AKS, ECPP, BLS75 theorem 5. Outputs certificates. Includes a certificate verifier for Primo and its own format. Single threaded. With the large poly set it benchmarks faster than anything else I've tried up to about 500 digits. Not well tuned for inputs larger than 2k digits however.
  • OpenPFGW. Has some good BLS75-style tests (tests involving finding partial factors of N-1 and/or N+1). No certificates. Its outstanding feature comes when dealing with multi-thousand digit numbers, where the use of Woltman's fast bignum code lets it do very fast Fermat tests. This makes a good pre-test when dealing with big numbers.

There are research groups with private code that also do primality proofs, but that doesn't help the rest of us. On the other hand, these groups sometimes publish papers describing their methods, which is of immense value. Atkin and Morain's papers are great stuff and allowed others to write independent software.


For small numbers trial division is best. Compute the square root of the number and check that no prime less than or equal to it divides the number. You may find it useful to multiply primes together into products that (just) fit inside a machine word, to reduce the number of divisions needed; in that case you take the GCD with each product rather than dividing by each prime (a sketch follows).
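A minimal sketch of that GCD trick, assuming 64-bit words; one gcd against a precomputed product of primes replaces fifteen divisions:

```c
#include <stdint.h>

static uint64_t gcd64(uint64_t a, uint64_t b) {  /* Euclid */
    while (b) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

/* 16294579238595022365 = 3*5*7*11*13*17*19*23*29*31*37*41*43*47*53,
   the largest such product fitting in 64 bits. Check evenness separately,
   and handle n <= 53 separately too: a small prime itself would also
   report a "factor" here. */
int has_small_odd_factor(uint64_t n) {
    return gcd64(n, 16294579238595022365ULL) > 1;
}
```

Additional words covering the primes from 59 upward extend this as far as you like.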

For somewhat larger numbers, say $10^6$ to $2^{64}$, I suggest the BPSW test, which has been verified to have no counterexamples below $2^{64}$.
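BPSW's first half is just a strong base-2 test; here is a sketch of that half for 64-bit operands, using GCC/Clang's __uint128_t for the modular multiply (the strong or extra-strong Lucas half described earlier still has to follow):

```c
#include <stdint.h>

/* Modular multiply for 64-bit moduli via 128-bit intermediates. */
static uint64_t mulmod64(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)(((__uint128_t)a * b) % m);
}

static uint64_t powmod64(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = mulmod64(r, b, m);
        b = mulmod64(b, b, m);
        e >>= 1;
    }
    return r;
}

/* Strong base-2 test: the Miller-Rabin half of BPSW (assumes n odd, n > 2). */
int is_sprp2(uint64_t n) {
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }
    uint64_t x = powmod64(2, d, n);
    if (x == 1 || x == n - 1) return 1;
    while (--s > 0) {
        x = mulmod64(x, x, n);
        if (x == n - 1) return 1;
    }
    return 0;
}
```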

For larger numbers fastECPP is probably the best available algorithm.