Do any JVM's JIT compilers generate code that uses vectorized floating point instructions?
So, basically, you want your code to run faster. JNI is the answer. I know you said it didn't work for you, but let me show you that you are wrong.
Here's Dot.java:
import java.nio.FloatBuffer;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.annotation.*;

@Platform(include = "Dot.h", compiler = "fastfpu")
public class Dot {
    static { Loader.load(); }

    static float[] a = new float[50], b = new float[50];

    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += a[i]*b[i];
        }
        return sum;
    }

    static native @MemberGetter FloatPointer ac();
    static native @MemberGetter FloatPointer bc();
    static native @NoException float dotc();

    public static void main(String[] args) {
        FloatBuffer ab = ac().capacity(50).asBuffer();
        FloatBuffer bb = bc().capacity(50).asBuffer();

        // Warm up both implementations so the JIT compiles them before timing.
        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }

        // Time the pure Java version, then the native version.
        long t1 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
        }
        long t2 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }
        long t3 = System.nanoTime();

        System.out.println("dot(): " + (t2 - t1)/10000000 + " ns");
        System.out.println("dotc(): " + (t3 - t2)/10000000 + " ns");
    }
}
and Dot.h:
float ac[50], bc[50];

inline float dotc() {
    float sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += ac[i]*bc[i];
    }
    return sum;
}
We can compile and run that with JavaCPP using this command:
$ java -jar javacpp.jar Dot.java -exec
With an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, Fedora 30, GCC 9.1.1, and OpenJDK 8 or 11, I get this kind of output:
dot(): 39 ns
dotc(): 16 ns
That's roughly 2.4 times faster. We do need to use direct NIO buffers instead of arrays, but HotSpot can access direct NIO buffers as fast as arrays. On the other hand, manually unrolling the loop does not provide a measurable performance boost in this case.
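As a side note, here is a minimal sketch of the same dot product written against plain direct NIO buffers in pure Java, just to illustrate the point about buffer access speed; the class name and buffer setup are mine and are not part of the benchmark above.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class DotBuffer {
    // Direct buffers in native byte order, same 50-element shape as above.
    static final FloatBuffer a = ByteBuffer.allocateDirect(50 * Float.BYTES)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    static final FloatBuffer b = ByteBuffer.allocateDirect(50 * Float.BYTES)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();

    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            // Per the answer above, HotSpot can access direct buffers about as
            // fast as it accesses plain arrays.
            sum += a.get(i) * b.get(i);
        }
        return sum;
    }
}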
To address some of the scepticism expressed by others here, I suggest anyone who wants to prove it to themselves or others use the following method:
- Create a JMH project
- Write a small snippet of vectorizable math.
- Run the benchmark flipping between -XX:-UseSuperWord and -XX:+UseSuperWord (the default)
- If no difference in performance is observed, your code probably didn't get vectorized
- To make sure, run your benchmark such that it prints out the assembly. On Linux you can use the perfasm profiler ('-prof perfasm') to have a look and see whether the instructions you expect get generated.
Example:
@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE) // makes looking at the assembly easier
public void inc() {
    for (int i = 0; i < a.length; i++)
        a[i]++; // a is an int[], I benchmarked with size 32K
}
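For completeness, a minimal self-contained harness around that snippet might look like the following. The package and class names match the psy.lob.saw.VectorMath names visible in the assembly below, but the @State/@Setup scaffolding and the exact array size are my assumptions.

package psy.lob.saw;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.CompilerControl;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class VectorMath {
    int[] a;

    @Setup
    public void setup() {
        // "size 32K" from the comment above, taken here to mean 32K elements.
        a = new int[32 * 1024];
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public void inc() {
        for (int i = 0; i < a.length; i++)
            a[i]++;
    }
}

Assuming the standard Maven benchmark archetype output, the two configurations from the list above can then be run like this:

$ java -jar target/benchmarks.jar VectorMath.inc -jvmArgsAppend -XX:+UseSuperWord -prof perfasm
$ java -jar target/benchmarks.jar VectorMath.inc -jvmArgsAppend -XX:-UseSuperWord -prof perfasm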
The result with and without the flag (on a recent Haswell laptop, Oracle JDK 8u60):
-XX:+UseSuperWord : 475.073 ± 44.579 ns/op (nanoseconds per op)
-XX:-UseSuperWord : 3376.364 ± 233.211 ns/op
The assembly for the hot loop is a bit much to format and stick in here, but here's a snippet (hsdis.so is failing to format some of the AVX2 vector instructions, so I ran with -XX:UseAVX=1): -XX:+UseSuperWord (with '-prof perfasm:intelSyntax=true')
9.15% 10.90% │││ │↗ 0x00007fc09d1ece60: vmovdqu xmm1,XMMWORD PTR [r10+r9*4+0x18]
10.63% 9.78% │││ ││ 0x00007fc09d1ece67: vpaddd xmm1,xmm1,xmm0
12.47% 12.67% │││ ││ 0x00007fc09d1ece6b: movsxd r11,r9d
8.54% 7.82% │││ ││ 0x00007fc09d1ece6e: vmovdqu xmm2,XMMWORD PTR [r10+r11*4+0x28]
│││ ││ ;*iaload
│││ ││ ; - psy.lob.saw.VectorMath::inc@17 (line 45)
10.68% 10.36% │││ ││ 0x00007fc09d1ece75: vmovdqu XMMWORD PTR [r10+r9*4+0x18],xmm1
10.65% 10.44% │││ ││ 0x00007fc09d1ece7c: vpaddd xmm1,xmm2,xmm0
10.11% 11.94% │││ ││ 0x00007fc09d1ece80: vmovdqu XMMWORD PTR [r10+r11*4+0x28],xmm1
│││ ││ ;*iastore
│││ ││ ; - psy.lob.saw.VectorMath::inc@20 (line 45)
11.19% 12.65% │││ ││ 0x00007fc09d1ece87: add r9d,0x8 ;*iinc
│││ ││ ; - psy.lob.saw.VectorMath::inc@21 (line 44)
8.38% 9.50% │││ ││ 0x00007fc09d1ece8b: cmp r9d,ecx
│││ │╰ 0x00007fc09d1ece8e: jl 0x00007fc09d1ece60 ;*if_icmpge
Have fun storming the castle!
In HotSpot versions beginning with Java 7u40, the server compiler provides support for auto-vectorisation, according to JDK-6340864.
However, this seems to be true only for "simple loops", at least for the moment. For example, accumulating an array cannot be vectorised yet (JDK-7192383).
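As an illustrative sketch (not taken from those issues), the distinction is roughly between an element-wise loop and a reduction into a scalar:

public class Loops {
    // A "simple loop": independent element-wise stores, a good candidate for
    // the server compiler's auto-vectorisation (Java 7u40+). Arrays are
    // assumed to have equal lengths.
    static void add(int[] a, int[] b, int[] c) {
        for (int i = 0; i < a.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    // A reduction: every iteration accumulates into the same scalar, which is
    // the kind of loop JDK-7192383 reports as not yet vectorised.
    static int sum(int[] a) {
        int total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];
        }
        return total;
    }
}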
Here is a nice article about experimenting with Java and SIMD instructions, written by my friend: http://prestodb.rocks/code/simd/
Its general conclusion is that you can expect the JIT to use some SSE operations in Java 1.8 (and some more in 1.9), though you should not expect much and you need to be careful.
You could write an OpenCL kernel to do the computation and run it from Java with http://www.jocl.org/.
The code can run on the CPU and/or the GPU, and the OpenCL language also supports vector types, so you should be able to take explicit advantage of, e.g., SSE3/4 instructions.
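For a rough idea only, such a kernel can be kept as a Java string and handed to a binding like JOCL at runtime; the kernel below is a hypothetical sketch showing OpenCL's vector types, not code taken from the JOCL documentation.

public class DotKernelSource {
    // Hypothetical OpenCL C kernel: float4 is a built-in vector type, and the
    // built-in dot() on float4 gives the compiler an obvious mapping to SIMD
    // (or GPU vector) hardware. Each work-item produces one partial result.
    static final String DOT_KERNEL =
        "__kernel void dot4(__global const float4 *a,\n" +
        "                   __global const float4 *b,\n" +
        "                   __global float *partial) {\n" +
        "    int i = get_global_id(0);\n" +
        "    partial[i] = dot(a[i], b[i]);\n" +
        "}\n";
}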