Why shouldn't I use PyPy over CPython if PyPy is 6.3 times faster?

NOTE: PyPy is more mature and better supported now than it was in 2013, when this question was asked. Avoid drawing conclusions from out-of-date information.


  1. PyPy, as others have been quick to mention, has tenuous support for C extensions. It has support, but typically at slower-than-CPython speeds and it's iffy at best, so a lot of modules simply require CPython. NumPy support, long a sore point, has improved, but some extensions are still not supported (Pandas, SciPy, etc.); take a look at the list of supported packages before making the change, and note that many packages marked unsupported on that list are now supported.
  2. Python 3 support was experimental when this was first written, but has since reached stable: as of 20th June 2014, PyPy3 2.3.1 - Fulcrum is out!
  3. PyPy sometimes isn't actually faster for "scripts", which a lot of people use Python for: short-running programs that do something simple and small. Because PyPy is a JIT compiler, its main advantages come from long run times and simple types (such as numbers); its pre-JIT speeds can be bad compared to CPython. A small timing sketch at the end of this answer makes this concrete.
  4. Inertia. Moving to PyPy often requires retooling, which for some people and organizations is simply too much work.

Those are the main reasons that affect me, I'd say.
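
To make point 3 concrete, here is a tiny timing sketch (standard library only; the numbers are purely illustrative and depend on your machine and interpreter versions). Run the same file under both interpreters: the short call behaves like a typical "script", while the long call gives the JIT time to warm up and pay for itself.

    # bench_sketch.py -- run under both interpreters, e.g.:
    #   python bench_sketch.py
    #   pypy bench_sketch.py
    import time

    def tight_loop(n):
        # CPU-bound numeric work: the kind of code a tracing JIT is good at.
        total = 0
        for i in range(n):
            total += i * i
        return total

    start = time.time()
    tight_loop(1000)            # "script-sized" work: the JIT never really kicks in
    print("short run: %.6f s" % (time.time() - start))

    start = time.time()
    tight_loop(50000000)        # long-running work: the warm-up cost is amortized
    print("long run:  %.6f s" % (time.time() - start))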


That site does not claim PyPy is 6.3 times faster than CPython. To quote:

The geometric average of all benchmarks is 0.16 or 6.3 times faster than CPython

This is a very different claim from the blanket statement you made, and when you understand the difference, you'll understand at least one set of reasons why you can't just say "use PyPy". It might sound like I'm nit-picking, but understanding why these two statements are totally different is vital.

To break that down:

  • The statement they make only applies to the benchmarks they've used. It says absolutely nothing about your program (unless your program is exactly the same as one of their benchmarks).

  • The statement is about an average of a group of benchmarks. There is no claim that running PyPy will give a 6.3 times improvement even for the programs they have tested (the toy example after this list shows how such an average behaves).

  • There is no claim that PyPy will even run all the programs that CPython runs at all, let alone faster.
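
As a toy illustration of the middle point (the numbers below are made up and have nothing to do with the real benchmark suite), here is how a geometric mean of relative runtimes behaves. The "6.3 times" figure comes from the relative runtime of 0.16, since 1 / 0.16 ≈ 6.3; an average like that can coexist with individual programs that see far less benefit, or none at all.

    # Hypothetical per-benchmark runtimes relative to CPython (lower = faster).
    relative_times = [0.05, 0.10, 0.20, 0.40, 1.30]

    product = 1.0
    for t in relative_times:
        product *= t
    geo_mean = product ** (1.0 / len(relative_times))

    print("geometric mean: %.2f" % geo_mean)                   # ~0.22
    print("about %.1fx faster on average" % (1.0 / geo_mean))  # ~4.5x
    # The last "benchmark" (1.30) is actually slower than CPython, yet the
    # average still advertises a healthy overall speed-up.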


Because PyPy is not 100% compatible, takes 8 gigs of RAM to compile, is a moving target, and is highly experimental, whereas CPython is stable, has been the default target for module builders for two decades (including C extensions that don't work on PyPy), and is already widely deployed.

PyPy will likely never be the reference implementation, but it is a good tool to have.


The second question is easier to answer: you basically can use PyPy as a drop-in replacement if all your code is pure Python. However, many widely used libraries (including some of the standard library) are written in C and compiled as Python extensions. Some of these can be made to work with PyPy, some can't. PyPy provides the same "forward-facing" tool as Python --- that is, it is Python --- but its innards are different, so tools that interface with those innards won't work.
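
If you want to see where you stand, a quick, runnable check (standard library only) tells you which implementation you're on, and the guarded import below shows the familiar pattern code uses when a C accelerator may or may not be present; the cPickle/pickle pair is just a common example, not a PyPy-specific recipe.

    import platform
    import sys

    print(platform.python_implementation())   # "CPython", "PyPy", ...
    print(sys.version)

    # Optional C accelerators are often imported defensively, so interpreters
    # or builds without the extension still work:
    try:
        import cPickle as pickle   # C extension module on CPython 2
    except ImportError:
        import pickle              # pure-Python fallback (Python 3 picks up
                                   # its own C accelerator automatically)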

As for the first question, I imagine it is sort of a Catch-22 with the second: PyPy has been evolving rapidly in an effort to improve speed and enhance interoperability with other code, and this has made it more experimental than official.

I think it's possible that if PyPy gets into a stable state, it may start getting more widely used. I also think it would be great for Python to move away from its C underpinnings. But it won't happen for a while. PyPy hasn't yet reached the critical mass where it is almost useful enough on its own to do everything you'd want, which would motivate people to fill in the gaps.


I did a small benchmark on this topic. While many of the other posters have made good points about compatibility, my experience has been that PyPy isn't that much faster for just moving bits around. In many uses of Python, the program really only exists to translate bits between two or more services. For example, not many web applications perform CPU-intensive analysis of datasets. Instead, they take some bytes from a client, store them in some sort of database, and later return them to other clients. Sometimes the format of the data is changed.

The BDFL and the CPython developers are a remarkably intelligent group of people and have managed to make CPython perform excellently in such a scenario. Here's a shameless blog plug: http://www.hydrogen18.com/blog/unpickling-buffers.html. I'm using Stackless, which is derived from CPython and retains the full C module interface. I didn't find any advantage to using PyPy in that case.
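
For flavor, here is a minimal sketch (standard library only, and not the benchmark from the linked post) of the shape of that byte-shuttling workload: take some bytes, stash a serialized copy, and hand it back later. The numbers you get will vary by interpreter and machine; the point is just the shape of the work.

    import pickle
    import time

    store = {}   # stand-in for "some sort of database"

    def handle_write(key, payload):
        # take some bytes from a "client" and store a serialized copy
        store[key] = pickle.dumps(payload)

    def handle_read(key):
        # later return the data to another "client"
        return pickle.loads(store[key])

    payload = {"user": "alice", "blob": b"x" * 4096}

    start = time.time()
    for i in range(100000):
        handle_write(i % 100, payload)
        handle_read(i % 100)
    print("elapsed: %.3f s" % (time.time() - start))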