Do any JVM's JIT compilers generate code that uses vectorized floating point instructions?

Tags: java, floating-point, jit, sse, vectorization

Java Problem Overview


Let's say the bottleneck of my Java program really is some tight loops to compute a bunch of vector dot products. Yes, I've profiled, yes it's the bottleneck, yes it's significant, yes that's just how the algorithm is, yes I've run ProGuard to optimize the bytecode, etc.

The work is, essentially, dot products. That is, I have two float[50] arrays and I need to compute the sum of their pairwise products. I know processor instruction sets exist to perform these kinds of operations quickly and in bulk, like SSE or MMX.

Yes, I can probably access these by writing some native code via JNI. But the JNI call itself turns out to be pretty expensive.

I know you can't guarantee what a JIT will or won't compile. Has anyone ever heard of a JIT generating code that uses these instructions? And if so, is there anything about the Java code that helps make it compilable this way?

Probably a "no", but it's worth asking.

Java Solutions


Solution 1 - Java

So, basically, you want your code to run faster. JNI is the answer. I know you said it didn't work for you, but let me show you that you are wrong.

Here's Dot.java:

import java.nio.FloatBuffer;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.annotation.*;

@Platform(include = "Dot.h", compiler = "fastfpu")
public class Dot {
    static { Loader.load(); }

    static float[] a = new float[50], b = new float[50];
    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += a[i]*b[i];
        }
        return sum;
    }
    // JavaCPP maps these onto the ac/bc arrays and the dotc() function in Dot.h
    static native @MemberGetter FloatPointer ac();
    static native @MemberGetter FloatPointer bc();
    static native @NoException float dotc();

    public static void main(String[] args) {
        FloatBuffer ab = ac().capacity(50).asBuffer();
        FloatBuffer bb = bc().capacity(50).asBuffer();

        // Warm up: exercise both implementations so the JIT compiles them before timing
        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
        }
        long t2 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }
        long t3 = System.nanoTime();
        System.out.println("dot(): " + (t2 - t1)/10000000 + " ns");
        System.out.println("dotc(): "  + (t3 - t2)/10000000 + " ns");
    }
}

and Dot.h:

float ac[50], bc[50];

inline float dotc() {
    float sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += ac[i]*bc[i];
    }
    return sum;
}

We can compile and run that with JavaCPP using this command:

$ java -jar javacpp.jar Dot.java -exec

With an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, Fedora 30, GCC 9.1.1, and OpenJDK 8 or 11, I get this kind of output:

dot(): 39 ns
dotc(): 16 ns

Or roughly 2.4 times faster. We need to use direct NIO buffers instead of arrays, but HotSpot can access direct NIO buffers as fast as arrays. On the other hand, manually unrolling the loop does not provide a measurable boost in performance, in this case.
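In case you are wondering what a direct NIO buffer looks like without JavaCPP's pointer mapping, here is a minimal sketch (not part of the benchmark above; the class and method names are ours):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class DirectBuffers {
    // A direct buffer lives outside the Java heap, so native code can read and
    // write it without copying; native byte order avoids byte-swap overhead.
    static FloatBuffer allocateFloats(int n) {
        return ByteBuffer.allocateDirect(n * Float.BYTES)
                         .order(ByteOrder.nativeOrder())
                         .asFloatBuffer();
    }
}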

Solution 2 - Java

To address some of the scepticism expressed by others here, I suggest anyone who wants to prove it to themselves (or to others) use the following method:

  • Create a JMH project
  • Write a small snippet of vectorizable math.
  • Run the benchmark flipping between -XX:-UseSuperWord and -XX:+UseSuperWord (the default); sample invocations follow the example below
  • If no difference in performance is observed, your code probably didn't get vectorized
  • To make sure, run the benchmark so that it prints out the assembly. On Linux you can use the perfasm profiler ('-prof perfasm') to have a look and see whether the instructions you expect get generated.

Example:

@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE) // makes looking at the assembly easier
public void inc() {
    for (int i = 0; i < a.length; i++)
        a[i]++; // a is an int[]; I benchmarked with size 32K
}
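For reference, invocations along these lines flip the flag and attach the profiler from the JMH command line (the jar path and benchmark name here are placeholders):

$ java -jar target/benchmarks.jar VectorMath.inc -jvmArgs "-XX:-UseSuperWord"
$ java -jar target/benchmarks.jar VectorMath.inc -prof perfasm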

The results with and without the flag (on a recent Haswell laptop, Oracle JDK 8u60), in nanoseconds per op:

-XX:+UseSuperWord :  475.073 ± 44.579 ns/op
-XX:-UseSuperWord : 3376.364 ± 233.211 ns/op

The assembly for the hot loop is a bit much to format and stick in here, but here's a snippet (hsdis.so was failing to format some of the AVX2 vector instructions, so I ran with -XX:UseAVX=1).

-XX:+UseSuperWord (with '-prof perfasm:intelSyntax=true'):

  9.15%   10.90%  │││ │↗    0x00007fc09d1ece60: vmovdqu xmm1,XMMWORD PTR [r10+r9*4+0x18]
 10.63%    9.78%  │││ ││    0x00007fc09d1ece67: vpaddd xmm1,xmm1,xmm0
 12.47%   12.67%  │││ ││    0x00007fc09d1ece6b: movsxd r11,r9d
  8.54%    7.82%  │││ ││    0x00007fc09d1ece6e: vmovdqu xmm2,XMMWORD PTR [r10+r11*4+0x28]
                  │││ ││                                                  ;*iaload
                  │││ ││                                                  ; - psy.lob.saw.VectorMath::inc@17 (line 45)
 10.68%   10.36%  │││ ││    0x00007fc09d1ece75: vmovdqu XMMWORD PTR [r10+r9*4+0x18],xmm1
 10.65%   10.44%  │││ ││    0x00007fc09d1ece7c: vpaddd xmm1,xmm2,xmm0
 10.11%   11.94%  │││ ││    0x00007fc09d1ece80: vmovdqu XMMWORD PTR [r10+r11*4+0x28],xmm1
                  │││ ││                                                  ;*iastore
                  │││ ││                                                  ; - psy.lob.saw.VectorMath::inc@20 (line 45)
 11.19%   12.65%  │││ ││    0x00007fc09d1ece87: add    r9d,0x8            ;*iinc
                  │││ ││                                                  ; - psy.lob.saw.VectorMath::inc@21 (line 44)
  8.38%    9.50%  │││ ││    0x00007fc09d1ece8b: cmp    r9d,ecx
                  │││ │╰    0x00007fc09d1ece8e: jl     0x00007fc09d1ece60  ;*if_icmpge

Have fun storming the castle!

Solution 3 - Java

In HotSpot versions beginning with Java 7u40, the server compiler provides support for auto-vectorisation, according to JDK-6340864.

However, this seems to be true only for "simple loops", at least for the moment. For example, accumulating an array cannot be vectorised yet (JDK-7192383).
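To make the "simple loop" distinction concrete, here is an illustration (our code, not from the bug reports): the element-wise loop below is the kind that could be auto-vectorised, while the accumulating loop is the kind that could not at the time.

// An element-wise "simple loop": eligible for auto-vectorisation
static void scale(float[] a, float[] b) {
    for (int i = 0; i < a.length; i++) {
        b[i] = a[i] * 2.0f;
    }
}

// An accumulating (reduction) loop: not vectorised at the time of JDK-7192383.
// Note this is exactly the shape of the dot product in the question.
static float sum(float[] a) {
    float s = 0f;
    for (int i = 0; i < a.length; i++) {
        s += a[i];
    }
    return s;
}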

Solution 4 - Java

Here is a nice article about experimenting with Java and SIMD instructions, written by my friend: http://prestodb.rocks/code/simd/

Its general outcome is that you can expect the JIT to use some SSE operations in 1.8 (and some more in 1.9). Still, you should not expect much, and you need to be careful.

Solution 5 - Java

You could write an OpenCL kernel to do the computation and run it from Java using http://www.jocl.org/.

The code can run on the CPU and/or GPU, and the OpenCL language also supports vector types, so you should be able to take explicit advantage of e.g. SSE3/4 instructions.
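For instance, a kernel using OpenCL's built-in float4 vector type and dot() function might look like this, held as a Java string for JOCL to compile (a sketch with hypothetical names, not from the original answer):

// Each work-item computes the dot product of one float4 pair;
// the host then sums the partial results.
static final String KERNEL_SOURCE =
    "__kernel void dot4(__global const float4 *a,\n" +
    "                   __global const float4 *b,\n" +
    "                   __global float *partial) {\n" +
    "    int i = get_global_id(0);\n" +
    "    partial[i] = dot(a[i], b[i]);\n" + // dot() is an OpenCL built-in for float4
    "}";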

Solution 6 - Java

Have a look at Performance comparison between Java and JNI for optimal implementation of computational micro-kernels. It shows that the Java HotSpot VM server compiler supports auto-vectorization using Super-word Level Parallelism, which is limited to simple cases of parallelism inside a loop. The article will also give you some guidance on whether your data size is large enough to justify going the JNI route.

Solution 7 - Java

I'm guessing you wrote this question before you found out about netlib-java ;-) It provides exactly the native API you require, with machine-optimised implementations, and has no cost at the native boundary thanks to memory pinning.
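For the dot product in the question, that boils down to a single BLAS level-1 call. A sketch, assuming the com.github.fommil.netlib artifact (sdot follows the standard BLAS signature):

import com.github.fommil.netlib.BLAS;

public class NetlibDot {
    public static void main(String[] args) {
        float[] a = new float[50], b = new float[50];
        java.util.Arrays.fill(a, 1f);
        java.util.Arrays.fill(b, 2f);
        // sdot(n, x, incx, y, incy): single-precision dot product
        float sum = BLAS.getInstance().sdot(50, a, 1, b, 1);
        System.out.println(sum); // prints 100.0
    }
}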

Solution 8 - Java

Java 16 introduced the Vector API (JEP 338, updated by JEP 414 and JEP 417). It is currently "incubating" (i.e., beta), although anyone can use it. It will probably become GA in Java 19 or 20.

It's a little verbose, but is meant to be reliable and portable.

The following code can be rewritten:

void scalarComputation(float[] a, float[] b, float[] c) {
   assert a.length == b.length && b.length == c.length;
   for (int i = 0; i < a.length; i++) {
        c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
   }
}

Using the Vector API:

static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

void vectorComputation(float[] a, float[] b, float[] c) {
    assert a.length == b.length && b.length == c.length;
    int i = 0;
    int upperBound = SPECIES.loopBound(a.length);
    for (; i < upperBound; i += SPECIES.length()) {
        // FloatVector va, vb, vc;
        var va = FloatVector.fromArray(SPECIES, a, i);
        var vb = FloatVector.fromArray(SPECIES, b, i);
        var vc = va.mul(va)
                   .add(vb.mul(vb))
                   .neg();
        vc.intoArray(c, i);
    }
    for (; i < a.length; i++) {
        c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
    }
}

Newer builds (i.e., Java 18) are trying to get rid of that last for loop using predicate (mask) instructions, but support for that is still supposedly spotty.
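Since the question is specifically about dot products, here is a sketch of the OP's float[50] kernel against the same API (dotVector is our name for it; compile and run with --add-modules jdk.incubator.vector):

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

static float dotVector(float[] a, float[] b) {
    assert a.length == b.length;
    float sum = 0f;
    int i = 0;
    int upperBound = SPECIES.loopBound(a.length);
    for (; i < upperBound; i += SPECIES.length()) {
        var va = FloatVector.fromArray(SPECIES, a, i);
        var vb = FloatVector.fromArray(SPECIES, b, i);
        // multiply lane-wise, then fold the lanes into a single float
        sum += va.mul(vb).reduceLanes(VectorOperators.ADD);
    }
    for (; i < a.length; i++) { // scalar tail
        sum += a[i] * b[i];
    }
    return sum;
}

Accumulating into a FloatVector and calling reduceLanes once after the loop would avoid the per-iteration reduction, at the cost of changing the floating-point summation order.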

Solution 9 - Java

I don't believe most, if any, VMs are ever smart enough for this sort of optimisation. To be fair, most optimisations are much simpler, such as shifting instead of multiplying when multiplying by a power of two. The Mono project introduced its own vector and other methods with native backings to help performance.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Sean Owen
Solution 1 - Java: Samuel Audet
Solution 2 - Java: Nitsan Wakart
Solution 3 - Java: Vedran
Solution 4 - Java: kokosing
Solution 5 - Java: Mikael Lepistö
Solution 6 - Java: Paul Jurczak
Solution 7 - Java: fommil
Solution 8 - Java: Aleksandr Dubinsky
Solution 9 - Java: mP.