Performance of Java matrix math libraries? [closed]

I'm the author of Java Matrix Benchmark (JMatBench) and I'll give my thoughts on this discussion.

There are significant differences between Java libraries, and while there is no clear winner across the whole range of operations, there are a few clear leaders, as can be seen in the latest performance results (October 2013).

If you are working with "large" matrices and can use native libraries, then the clear winner (about 3.5x faster) is MTJ with system-optimised netlib. If you need a pure Java solution, then MTJ, OjAlgo, EJML and Parallel Colt are good choices. For small matrices, EJML is the clear winner.
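For illustration, here is a minimal sketch of what a matrix multiply looks like with EJML's SimpleMatrix wrapper (assuming a recent EJML release on the classpath; the other pure Java libraries offer broadly similar APIs):

```java
import org.ejml.simple.SimpleMatrix;

public class EjmlExample {
    public static void main(String[] args) {
        // Two small dense matrices, built from plain 2D arrays
        SimpleMatrix a = new SimpleMatrix(new double[][] {{1, 2}, {3, 4}});
        SimpleMatrix b = new SimpleMatrix(new double[][] {{5, 6}, {7, 8}});

        SimpleMatrix c = a.mult(b);   // matrix product a * b
        c.print();                    // prints the 2x2 result
    }
}
```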

The libraries I did not mention showed significant performance issues or were missing key features.


Just to add my 2 cents: I've compared some of these libraries by multiplying a 3000 by 3000 matrix of doubles with itself. The results are as follows.

Using multithreaded ATLAS with C/C++, Octave, Python and R, the time taken was around 4 seconds.

Using Jama with Java, the time taken was 50 seconds.

Using Colt and Parallel Colt with Java, the time taken was 150 seconds!

Using JBLAS with Java, the time taken was again around 4 seconds as JBLAS uses multithreaded ATLAS.

So for me it was clear that the Java libraries didn't perform too well. However, if someone has to code in Java, then the best option is JBLAS; Jama, Colt and Parallel Colt are not fast.
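For reference, a timing harness along these lines takes only a few lines of Java. This sketch uses Jama (Matrix.random and times are Jama's actual API); the matrix size matches the test above, though exact timings will of course vary by machine:

```java
import Jama.Matrix;

public class JamaBenchmark {
    public static void main(String[] args) {
        int n = 3000;
        Matrix a = Matrix.random(n, n);      // n x n matrix of uniform doubles

        long start = System.nanoTime();
        Matrix c = a.times(a);               // pure-Java matrix multiply
        long elapsed = System.nanoTime() - start;

        System.out.printf("Jama %dx%d multiply: %.1f s%n", n, n, elapsed / 1e9);
        System.out.println(c.get(0, 0));     // touch the result so it is used
    }
}
```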


I'm the main author of jblas and wanted to point out that I released version 1.0 in late December 2009. I worked a lot on the packaging, meaning that you can now just download a "fat jar" with the ATLAS and JNI libraries for Windows, Linux and Mac OS X, in 32- and 64-bit builds (no 64-bit build for Windows yet). This way you get native performance just by adding the jar file to your classpath. Check it out at http://jblas.org!
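As a quick sketch of what that looks like in practice, here is a minimal jblas example (DoubleMatrix.rand and mmul are jblas's actual API; mmul dispatches to the bundled native BLAS):

```java
import org.jblas.DoubleMatrix;

public class JblasExample {
    public static void main(String[] args) {
        // 3000 x 3000 matrix of random doubles
        DoubleMatrix a = DoubleMatrix.rand(3000, 3000);

        long start = System.nanoTime();
        DoubleMatrix c = a.mmul(a);   // A * A via the native BLAS routine
        long elapsed = System.nanoTime() - start;

        System.out.printf("3000x3000 multiply took %.1f s%n", elapsed / 1e9);
        System.out.println("c[0,0] = " + c.get(0, 0)); // touch the result so it is used
    }
}
```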