Why is processing an unsorted array the same speed as processing a sorted array with modern x86-64 clang?
Solution 1:
Several of the answers in the question you link discuss rewriting the code to be branchless, thus avoiding any branch-prediction issues. That's what your updated compiler is doing.
Specifically, clang++ 10 with -O3 vectorizes the inner loop. See the code on godbolt, lines 36-67 of the assembly. The code is a little bit complicated, but one thing you definitely don't see is any conditional branch on the data[c] >= 128 test. Instead it uses vector compare instructions (pcmpgtd) whose output is a mask with 1s for matching elements and 0s for non-matching ones. A subsequent pand with this mask replaces the non-matching elements with 0, so that they contribute nothing when unconditionally added to the sum.
The rough C++ equivalent would be
sum += data[c] & -(data[c] >= 128);
The code actually keeps two running 64-bit sums, for the even and odd elements of the array, so that they can be accumulated in parallel and then added together at the end of the loop.
Some of the extra complexity is to take care of sign-extending the 32-bit data elements to 64 bits; that's what sequences like pxor xmm5, xmm5 ; pcmpgtd xmm5, xmm4 ; punpckldq xmm4, xmm5 accomplish. Turn on -mavx2 and you'll see a simpler vpmovsxdq ymm5, xmm5 in its place.
The code also looks long because the loop has been unrolled, processing 8 elements of data per iteration.