Optimization by Java Compiler
Recently, I was reading this article.
According to that article, the Java compiler, i.e. javac, does not perform any optimization while generating bytecode. Is that really true? If so, could it act as an intermediate code generator that removes redundancy and produces more optimal code?
javac will only do very little optimization, if any.

The point is that the JIT compiler does most of the optimization - and it works best if it has a lot of information, some of which may be lost if javac performed optimization too. If javac performed some sort of loop unrolling, it would be harder for the JIT to do that itself in a general way - and the JIT has more information about which optimizations will actually work, as it knows the target platform.
I stopped reading when I got to this section:
More importantly, the javac compiler does not perform simple optimizations like loop unrolling, algebraic simplification, strength reduction, and others. To get these benefits and other simple optimizations, the programmer must perform them on the Java source code and not rely on the javac compiler to perform them.
Firstly, doing loop unrolling on Java source code is hardly ever a good idea. The reason javac doesn't do much in the way of optimization is that the optimization is done by the JIT compiler in the JVM, which can make much better decisions than a static compiler could, because it can see exactly which code is getting run the most.
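To illustrate (this is my own sketch, not code from either article): the plain loop below is what you would normally write and leave to the JIT, while the hand-unrolled version is the kind of source-level rewrite being discouraged here.

    class LoopExample {
        // Plain loop: javac compiles it essentially as written, and the JIT
        // decides at run time whether unrolling pays off on the target CPU.
        static int sum(int[] a) {
            int total = 0;
            for (int i = 0; i < a.length; i++) {
                total += a[i];
            }
            return total;
        }

        // Hand-unrolled equivalent. It obscures the intent and rarely helps,
        // because the JIT already performs unrolling where it is profitable.
        static int sumUnrolled(int[] a) {
            int total = 0;
            int i = 0;
            for (; i + 4 <= a.length; i += 4) {
                total += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
            }
            for (; i < a.length; i++) {  // remainder elements
                total += a[i];
            }
            return total;
        }
    }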
The javac compiler once supported an option to generate optimized bytecode by passing -O on the command line.

However, starting with J2SE 1.3, the HotSpot JVM was shipped with the platform, which introduced dynamic techniques such as just-in-time compilation and adaptive optimization of common execution paths. Hence the -O option has been ignored by the Java compiler since that version.
I came across this flag when reading about the Ant javac task and its optimize attribute:

Indicates whether source should be compiled with optimization; defaults to off. Note that this flag is just ignored by Sun's javac starting with JDK 1.3 (since compile-time optimization is unnecessary).
The advantages of the HotSpot JVM's dynamic optimizations over compile-time optimization are described on this page:
The Server VM contains an advanced adaptive compiler that supports many of the same types of optimizations performed by optimizing C++ compilers, as well as some optimizations that cannot be done by traditional compilers, such as aggressive inlining across virtual method invocations. This is a competitive and performance advantage over static compilers. Adaptive optimization technology is very flexible in its approach, and typically outperforms even advanced static analysis and compilation techniques.
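To make the "inlining across virtual method invocations" point concrete, here is a small illustrative sketch of my own (not taken from the cited page): a call through an interface is a virtual dispatch in the bytecode, but HotSpot can devirtualize and inline it once it observes which implementation is actually being used.

    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Demo {
        // s.area() is an interface (virtual) call in the bytecode. If the JIT
        // observes that every Shape reaching this loop is a Circle, it can
        // inline the call - something a static compiler cannot safely do
        // without whole-program knowledge.
        static double totalArea(Shape[] shapes) {
            double total = 0;
            for (Shape s : shapes) {
                total += s.area();
            }
            return total;
        }
    }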
I have studied the bytecode output by javac in the past (using an app called FrontEnd). It basically doesn't do any optimization, except for inlining constants (static finals) and precalculating constant expressions (like 2*5 and "ab"+"cd"). This is part of why it is so easy to disassemble (using an app called JAD).
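For example (my own sketch), the constant expressions below are folded by javac at compile time; disassembling the class with javap -c shows the literal 10 and the string "abcd" rather than any multiplication or concatenation:

    class ConstantFolding {
        // static final primitives and Strings are compile-time constants;
        // javac inlines their values wherever they are used.
        static final int TEN = 2 * 5;            // stored as 10 in the class file
        static final String AB_CD = "ab" + "cd"; // stored as "abcd"

        static int scaled(int x) {
            // javac emits the literal 10 here; no multiplication remains.
            return x * TEN;
        }
    }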
I also discovered some interesting points for optimizing your Java code. They helped me improve the speed of my inner loops by a factor of 2.5.

A method has 5 quick-access local variable slots. Accessing these is faster than accessing any other variables (probably because of stack maintenance). A method's parameters also count towards these 5. So if you have code inside a for loop that is executed, say, a million times, declare those variables at the start of the method and avoid parameters.

Local variables are also faster than fields, so if you use fields inside inner loops, cache them by assigning them to a local variable at the start of the method. Cache the reference, not the contents (like: int[] px = this.pixels;).
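A minimal sketch of that field-caching pattern (the class, field, and method names here are hypothetical, chosen to match the pixels example above):

    class Renderer {
        private int[] pixels = new int[1024 * 768];

        void darken() {
            // Cache the field reference in a local variable once, before the
            // hot loop, so the loop body only touches local variable slots.
            int[] px = this.pixels;
            for (int i = 0; i < px.length; i++) {
                px[i] = px[i] >>> 1;  // operate through the cached reference
            }
        }
    }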