Why doesn't the JVM cache JIT compiled code?

The canonical JVM implementation from Sun applies some pretty sophisticated optimization to bytecode to obtain near-native execution speeds after the code has been run a few times.

The question is, why isn't this compiled code cached to disk for use during subsequent uses of the same function/class?

As it stands, every time a program is executed, the JIT compiler kicks in afresh, rather than using a pre-compiled version of the code. Wouldn't adding this feature significantly improve the initial run time of the program, during which the bytecode is essentially being interpreted?
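As a small, hypothetical way to watch this happen (the HotLoop class below is invented for illustration), you can run a program with the standard HotSpot flag -XX:+PrintCompilation; on every fresh invocation the JVM prints a similar stream of compilation events, because nothing compiled during the previous run is reused.

```java
// Hypothetical demo: run with  java -XX:+PrintCompilation HotLoop
// and note that HotSpot reports its compilation work again on every
// fresh invocation of the program -- nothing is reused from the last run.
public class HotLoop {
    // A method that becomes "hot" after enough iterations and is
    // typically JIT-compiled to native code.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Call the method repeatedly so the JIT's invocation counters trip.
        for (int i = 0; i < 20_000; i++) {
            result += sumOfSquares(1_000);
        }
        System.out.println(result);
    }
}
```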


Without resorting to cut'n'paste of the link that @MYYN posted, I suspect this is because the optimisations that the JVM performs are not static, but rather dynamic, based on the data patterns as well as code patterns. It's likely that these data patterns will change during the application's lifetime, rendering the cached optimisations less than optimal.

So you'd need a mechanism to establish whether the saved optimisations were still optimal, at which point you might as well just re-optimise on the fly.
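As a minimal sketch of what "optimisations based on data patterns" can mean in practice (the ProfileDependent and Shape types below are made up for illustration, not taken from any JVM source), consider a virtual call site whose best compiled form depends on which types the program has actually seen at run time:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a call site whose optimal compiled form depends on run-time data.
public class ProfileDependent {
    interface Shape { double area(); }

    static final class Circle implements Shape {
        final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    static final class Square implements Shape {
        final double s;
        Square(double s) { this.s = s; }
        public double area() { return s * s; }
    }

    // While only Circles flow through here, HotSpot can profile the call site,
    // speculate that it is monomorphic, and inline Circle.area() directly.
    static double totalArea(List<Shape> shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();   // may be speculatively devirtualized based on observed types
        }
        return total;
    }

    public static void main(String[] args) {
        List<Shape> shapes = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) shapes.add(new Circle(1.0));
        System.out.println(totalArea(shapes));  // profile so far says: always Circle

        // Later a new type appears: the earlier speculation no longer holds, and
        // the JIT may deoptimize and recompile. Code cached from a run that only
        // ever saw Circles would be suboptimal (or simply invalid) for this run.
        shapes.add(new Square(2.0));
        System.out.println(totalArea(shapes));
    }
}
```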


Oracle's JVM is indeed documented to do so -- quoting Oracle,

the compiler can take advantage of Oracle JVM's class resolution model to optionally persist compiled Java methods across database calls, sessions, or instances. Such persistence avoids the overhead of unnecessary recompilations across sessions or instances, when it is known that semantically the Java code has not changed.

I don't know why all sophisticated VM implementations don't offer similar options.


An update to the existing answers: Java 8 has a JEP dedicated to solving this:

=> JEP 145: Cache Compiled Code.

At a very high level, its stated goal is:

Save and reuse compiled native code from previous runs in order to improve the startup time of large Java applications.
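As a rough, hypothetical illustration of the warm-up cost this JEP targets (the WarmupTiming class and its numbers are invented for the example, not part of the JEP), the same work tends to be noticeably slower on the first iterations, while the code is still interpreted or freshly compiled, than once the JIT has finished its job:

```java
// Rough sketch of JIT warm-up: early rounds pay the interpretation and
// compilation cost; a persisted code cache would let later runs of the
// whole program start closer to the "warm" timings right away.
public class WarmupTiming {
    static double work(int n) {
        double acc = 0;
        for (int i = 1; i <= n; i++) {
            acc += Math.sqrt(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 10; round++) {
            long start = System.nanoTime();
            double r = work(2_000_000);
            long elapsedMicros = (System.nanoTime() - start) / 1_000;
            System.out.printf("round %d: %d us (result %.1f)%n", round, elapsedMicros, r);
        }
    }
}
```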

Hope this helps.


Excelsior JET has had a caching JIT compiler since version 2.0, released back in 2001. Moreover, its AOT compiler can recompile the cache into a single DLL/shared object with all optimizations applied.