SIMD prefix sum on Intel CPU

The fastest parallel prefix sum algorithm I know of runs over the array in two passes, in parallel, and uses SSE in the second pass as well.

In the first pass you calculate partial sums in parallel and store the total of each partial sum. In the second pass you add the total of all preceding partial sums to each element of the current partial sum. You can run both passes in parallel using multiple threads (e.g. with OpenMP). In the second pass you can also use SIMD, since a constant value is being added to every element of each partial sum.
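As a scalar sketch of the two passes (OpenMP only, no SIMD yet; scan_two_pass_scalar is just an illustrative name), before the real code further down:

#include <algorithm>
#include <omp.h>
#include <vector>

void scan_two_pass_scalar(const float *a, float *s, int n, int nthreads) {
    std::vector<float> partial(nthreads + 1, 0.0f);   // chunk totals from pass 1
    #pragma omp parallel num_threads(nthreads)
    {
        const int t = omp_get_thread_num();
        const int chunk = (n + nthreads - 1) / nthreads;
        const int lo = t * chunk, hi = std::min(n, lo + chunk);
        float sum = 0.0f;                              // pass 1: scan own chunk
        for (int i = lo; i < hi; i++) { sum += a[i]; s[i] = sum; }
        partial[t + 1] = sum;
        #pragma omp barrier
        #pragma omp single                             // exclusive-scan the chunk totals
        for (int i = 1; i <= nthreads; i++) partial[i] += partial[i - 1];
        for (int i = lo; i < hi; i++) s[i] += partial[t];  // pass 2: add constant offset
    }
}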

Assuming n elements of an array, m cores, and a SIMD width of w, the time cost should be

n/m + n/(m*w) = (n/m)*(1+1/w)

Since the first pass does not use SIMD, the time cost will always be greater than n/m.

For example, for four cores with a SIMD width of 4 (four 32-bit floats with SSE), the cost would be n/4 + n/16 = 5n/16, or about 3.2 times faster than sequential code, which has a time cost of n. Using hyper-threading the speedup will be greater still.

In special cases it's possible to use SIMD on the first pass as well. Then the time cost is simply

2*n/(m*w)

I posted the code for the general case, which uses OpenMP for the threading and intrinsics for the SSE code, and discuss details about the special case at the following link: parallel-prefix-cumulative-sum-with-sse

Edit: I managed to find a SIMD version for the first pass which is about twice as fast as sequential code. Now I get a total boost of about 7 on my four-core Ivy Bridge system.

Edit: For larger arrays, one problem is that after the first pass most values have been evicted from the cache. I came up with a solution which runs in parallel inside a chunk but processes the chunks serially. chunk_size is a value that should be tuned; for example, I set it to 1 MB = 256K floats. Now the second pass is done while the values are still inside the level-2 cache. Doing this gives a big improvement for large arrays.

Here is the code for SSE. The AVX code is about the same speed so I did not post it here. The function which does the prefix sum is scan_omp_SSEp2_SSEp1_chunk. Pass it an array a of floats and it fills the array s with the cumulative sum.

#include <immintrin.h>   // SSE intrinsics
#include <omp.h>

// In-register inclusive scan of four floats: two shift-and-add steps leave
// element i holding the sum of elements 0..i.
__m128 scan_SSE(__m128 x) {
    x = _mm_add_ps(x, _mm_castsi128_ps(_mm_slli_si128(_mm_castps_si128(x), 4))); // add x shifted up by one element
    x = _mm_add_ps(x, _mm_shuffle_ps(_mm_setzero_ps(), x, 0x40));                // add x shifted up by two elements
    return x;
}

// Pass 1: each thread scans its static share of 4-float blocks, carrying a
// running offset, and returns the total of its share.
float pass1_SSE(float *a, float *s, const int n) {
    __m128 offset = _mm_setzero_ps();
    #pragma omp for schedule(static) nowait
    for (int i = 0; i < n / 4; i++) {
        __m128 x = _mm_load_ps(&a[4 * i]);
        __m128 out = scan_SSE(x);
        out = _mm_add_ps(out, offset);
        _mm_store_ps(&s[4 * i], out);
        offset = _mm_shuffle_ps(out, out, _MM_SHUFFLE(3, 3, 3, 3)); // broadcast the running total
    }
    float tmp[4];
    _mm_store_ps(tmp, offset);
    return tmp[3];
}

// Pass 2: add a constant offset (the total of everything before this thread's
// share) to each 4-float block; the static schedule matches pass 1.
void pass2_SSE(float *s, __m128 offset, const int n) {
    #pragma omp for schedule(static)
    for (int i = 0; i < n / 4; i++) {
        __m128 tmp1 = _mm_load_ps(&s[4 * i]);
        tmp1 = _mm_add_ps(tmp1, offset);
        _mm_store_ps(&s[4 * i], tmp1);
    }
}

void scan_omp_SSEp2_SSEp1_chunk(float a[], float s[], int n) {
    float *suma;                  // per-thread totals for the current chunk
    const int chunk_size = 1<<18; // 256K floats = 1 MB, so pass 2 stays in L2
    const int nchunks = n%chunk_size == 0 ? n / chunk_size : n / chunk_size + 1;
    //printf("nchunks %d\n", nchunks);
    #pragma omp parallel
    {
        const int ithread = omp_get_thread_num();
        const int nthreads = omp_get_num_threads();

        #pragma omp single
        {
            suma = new float[nthreads + 1];
            suma[0] = 0;
        }

        float offset2 = 0.0f;     // running total of all previous chunks
        for (int c = 0; c < nchunks; c++) {
            const int start = c*chunk_size;
            const int chunk = (c + 1)*chunk_size < n ? chunk_size : n - c*chunk_size;
            suma[ithread + 1] = pass1_SSE(&a[start], &s[start], chunk);
            #pragma omp barrier
            #pragma omp single
            {
                float tmp = 0;
                for (int i = 0; i < (nthreads + 1); i++) {
                    tmp += suma[i];
                    suma[i] = tmp;
                }
            }
            __m128 offset = _mm_set1_ps(suma[ithread]+offset2);
            pass2_SSE(&s[start], offset, chunk);
            #pragma omp barrier
            offset2 = s[start + chunk-1]; // running total through the end of this chunk
        }
    }
    delete[] suma;
}
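
A minimal usage sketch, assuming the functions above are in the same file. Note that the loads/stores are the aligned _mm_load_ps/_mm_store_ps, so the arrays must be 16-byte aligned and n should be a multiple of 4.

#include <cstdio>

int main() {
    const int n = 1 << 20;
    float *a = (float*)_mm_malloc(n * sizeof(float), 16); // 16-byte aligned
    float *s = (float*)_mm_malloc(n * sizeof(float), 16);
    for (int i = 0; i < n; i++) a[i] = 1.0f;
    scan_omp_SSEp2_SSEp1_chunk(a, s, n);
    printf("s[n-1] = %.1f (expected %d)\n", s[n-1], n);
    _mm_free(a); _mm_free(s);
    return 0;
}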

You can exploit some minor parallelism for large register lengths and small sums. For instance, adding up 16 values of 1 byte (which happen to fit into one SSE register) requires only log2(16) = 4 additions and an equal number of shifts.
Not much, but faster than 15 dependent additions and the additional memory accesses.

__m128i x = _mm_set_epi8(3,1,7,0,4,1,6,3,3,1,7,0,4,1,6,3);
x = _mm_add_epi8(x, _mm_srli_si128(x, 1));
x = _mm_add_epi8(x, _mm_srli_si128(x, 2));
x = _mm_add_epi8(x, _mm_srli_si128(x, 4));
x = _mm_add_epi8(x, _mm_srli_si128(x, 8));

// x == 3, 4, 11, 11, 15, 16, 22, 25, 28, 29, 36, 36, 40, 41, 47, 50
// (read in the same high-to-low order as the _mm_set_epi8 arguments)

If you have longer sums, the dependencies could be hidden by exploiting instruction level parallelism and taking advantage of instruction reordering.

Edit: something like

__m128i x0 = _mm_set_epi8(3,1,7,0,4,1,6,3,3,1,7,0,4,1,6,3);
__m128i x1 = _mm_set_epi8(3,1,7,0,4,1,6,3,3,1,7,0,4,1,6,3);
__m128i x2 = _mm_set_epi8(3,1,7,0,4,1,6,3,3,1,7,0,4,1,6,3);
__m128i x3 = _mm_set_epi8(3,1,7,0,4,1,6,3,3,1,7,0,4,1,6,3);

__m128i mask = _mm_set_epi8(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0); // broadcast byte 0, which holds a vector's total after the shifts

x0 = _mm_add_epi8(x0, _mm_srli_si128(x0, 1));
x1 = _mm_add_epi8(x1, _mm_srli_si128(x1, 1));
x2 = _mm_add_epi8(x2, _mm_srli_si128(x2, 1));
x3 = _mm_add_epi8(x3, _mm_srli_si128(x3, 1));

x0 = _mm_add_epi8(x0, _mm_srli_si128(x0, 2));
x1 = _mm_add_epi8(x1, _mm_srli_si128(x1, 2));
x2 = _mm_add_epi8(x2, _mm_srli_si128(x2, 2));
x3 = _mm_add_epi8(x3, _mm_srli_si128(x3, 2));

x0 = _mm_add_epi8(x0, _mm_srli_si128(x0, 4));
x1 = _mm_add_epi8(x1, _mm_srli_si128(x1, 4));
x2 = _mm_add_epi8(x2, _mm_srli_si128(x2, 4));
x3 = _mm_add_epi8(x3, _mm_srli_si128(x3, 4));

x0 = _mm_add_epi8(x0, _mm_srli_si128(x0, 8));
x1 = _mm_add_epi8(x1, _mm_srli_si128(x1, 8));
x2 = _mm_add_epi8(x2, _mm_srli_si128(x2, 8));
x3 = _mm_add_epi8(x3, _mm_srli_si128(x3, 8));

x1 = _mm_add_epi8(_mm_shuffle_epi8(x0, mask), x1);
x2 = _mm_add_epi8(_mm_shuffle_epi8(x1, mask), x2);
x3 = _mm_add_epi8(_mm_shuffle_epi8(x2, mask), x3);

Prefix sum can be computed in parallel; it's actually one of the foundational algorithms in GPU programming. If you're using SIMD extensions on an Intel processor I'm not sure if doing it in parallel will actually benefit you much, but take a look at this paper from NVIDIA on implementing parallel prefix sum (just look at the algorithms and ignore the CUDA): Parallel Prefix Sum (Scan) with CUDA.
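
For reference, here is a minimal scalar sketch of the work-efficient (Blelloch) scan that the paper describes, assuming n is a power of two; it computes an exclusive scan in place, and on a GPU each inner loop would run in parallel:

#include <stddef.h>

void blelloch_scan(int *x, size_t n) {
    // up-sweep: build a reduction tree; x[n-1] ends up holding the total
    for (size_t d = 1; d < n; d *= 2)
        for (size_t i = 2*d - 1; i < n; i += 2*d)
            x[i] += x[i - d];

    x[n - 1] = 0;  // clear the root for the exclusive scan

    // down-sweep: push partial sums back down the tree
    for (size_t d = n / 2; d >= 1; d /= 2)
        for (size_t i = 2*d - 1; i < n; i += 2*d) {
            int t = x[i - d];
            x[i - d] = x[i];
            x[i] += t;
        }
}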


For an array of 1000 32-bit integers, I was able to get a small speedup of about 1.4x single-threaded, using @hirschhornsalz's method in a loop on Intel Sandybridge. With a 60kiB buffer of ints, the speedup is about 1.37. With 8MiB of ints, the speedup is still 1.13. (i5-2500k at 3.8GHz turbo, with DDR3-1600.)

Smaller elements (int16_t or uint8_t, or the unsigned versions) would take an extra stage of shift/add for each doubling of the number of elements per vector. Overflow is bad, so don't try to use a data type that can't hold the sum of all elements, even though it gives SSE a bigger advantage.
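
For example (a sketch; scan_epi16 is a made-up name), an in-register scan of eight int16_t needs three shift/add stages instead of the four used for 16 bytes:

#include <emmintrin.h>

static inline __m128i scan_epi16(__m128i x) {
    x = _mm_add_epi16(x, _mm_slli_si128(x, 2));  // add the element 1 position below
    x = _mm_add_epi16(x, _mm_slli_si128(x, 4));  // ... 2 positions below
    x = _mm_add_epi16(x, _mm_slli_si128(x, 8));  // ... 4 positions below
    return x;
}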

#include <immintrin.h>
#ifdef IACA_MARKS_OFF          // build with -DIACA_MARKS_OFF for timing runs
  #define IACA_START
  #define IACA_END
#else
  #include <iacaMarks.h>       // provides the IACA_START / IACA_END markers
#endif

// In-place rewrite an array of values into an array of prefix sums.
// This makes the code simpler, and minimizes cache effects.
int prefix_sum_sse(int data[], int n)
{
//    const int elemsz = sizeof(data[0]);
#define elemsz sizeof(data[0])   // clang-3.5 doesn't allow compile-time-const int as an imm8 arg to intrinsics

    __m128i *datavec = (__m128i*)data;
    const int vec_elems = sizeof(*datavec)/elemsz;
    // to use this for int8/16_t, you still need to change the add_epi32, and the shuffle

    const __m128i *endp = (__m128i*) (data + n - 2*vec_elems);  // don't start an iteration beyond this
    __m128i carry = _mm_setzero_si128();
    for(; datavec <= endp ; datavec += 2) {
        IACA_START
        __m128i x0 = _mm_load_si128(datavec + 0);
        __m128i x1 = _mm_load_si128(datavec + 1); // unroll / pipeline by 1
//      __m128i x2 = _mm_load_si128(datavec + 2);
//      __m128i x3;

        x0 = _mm_add_epi32(x0, _mm_slli_si128(x0, elemsz));  // for floats, use shufps not bytewise-shift
        x1 = _mm_add_epi32(x1, _mm_slli_si128(x1, elemsz));

        x0 = _mm_add_epi32(x0, _mm_slli_si128(x0, 2*elemsz));
        x1 = _mm_add_epi32(x1, _mm_slli_si128(x1, 2*elemsz));

    // more shifting if vec_elems is larger

        x0 = _mm_add_epi32(x0, carry);  // this has to go after the byte-shifts, to avoid double-counting the carry.
        _mm_store_si128(datavec +0, x0); // store first to allow destructive shuffle (non-avx pshufb if needed)

        x1 = _mm_add_epi32(_mm_shuffle_epi32(x0, _MM_SHUFFLE(3,3,3,3)), x1);
        _mm_store_si128(datavec +1, x1);

        carry = _mm_shuffle_epi32(x1, _MM_SHUFFLE(3,3,3,3)); // broadcast the high element for next vector
    }
    // scalar loop to handle the last few elements (n=1000 in the test below is
    // a multiple of 8, so this cleanup doesn't run there)
    int sum = _mm_cvtsi128_si32(carry);   // every lane of carry holds the running total
    for (int *p = (int*)datavec; p < data + n; p++) {
        sum += *p;
        *p = sum;
    }
    IACA_END
    return data[n-1];
    #undef elemsz
}

int prefix_sum_simple(int data[], int n)
{
    int sum=0;
    for (int i=0; i<n ; i++) {
        IACA_START
        sum += data[i];
        data[i] = sum;
    }
    IACA_END
    return sum;
}

// perl -we '$n=1000; sub rnlist($$) { return map { int rand($_[1]) } ( 1..$_[0] );}  @a=rnlist($n,127);   $"=", "; print "$n\n@a\n";'

int data[] = { 51, 83, 126, 11,   20, 63, 113, 102,
       126,67, 83, 113,   86, 123, 30, 109,
       97, 71, 109, 86,   67, 60,  47, 12,
       /* ... */ };


int main(int argc, char**argv)
{
    const int elemsz = sizeof(data[0]);
    const int n = sizeof(data)/elemsz;
    const long reps = 1000000 * 1000 / n;
    if (argc >= 2 && *argv[1] == 'n') {
        for (int i=0; i < reps ; i++)
            prefix_sum_simple(data, n);
    }else {
        for (int i=0; i < reps ; i++)
            prefix_sum_sse(data, n);
    }
    return 0;
}

Testing with n=1000, with the list compiled into the binary. (And yes, I checked that it's actually looping, not taking any compile-time shortcuts that make the vector or non-vector test meaningless.)

Note that compiling with AVX to get 3-operand non-destructive vector instructions saves a lot of movdqa instructions, but only saves a tiny amount of cycles. This is because shuffle and vector-int-add can both only run on ports 1 and 5, on SnB/IvB, so port0 has plenty of spare cycles to run the mov instructions. uop-cache throughput bottlenecks might be the reason why the non-AVX version is slightly slower. (All those extra mov instructions push us up to 3.35 insn/cycle). The frontend is only idle 4.54% of cycles, so it's barely keeping up.

gcc -funroll-loops -DIACA_MARKS_OFF -g -std=c11 -Wall -march=native -O3 prefix-sum.c -mno-avx -o prefix-sum-noavx

  # gcc 4.9.2

################# SSE (non-AVX) vector version ############
$ ocperf.py stat -e task-clock,cycles,instructions,uops_issued.any,uops_dispatched.thread,uops_retired.all,uops_retired.retire_slots,stalled-cycles-frontend,stalled-cycles-backend ./prefix-sum-noavx 
perf stat -e task-clock,cycles,instructions,cpu/event=0xe,umask=0x1,name=uops_issued_any/,cpu/event=0xb1,umask=0x1,name=uops_dispatched_thread/,cpu/event=0xc2,umask=0x1,name=uops_retired_all/,cpu/event=0xc2,umask=0x2,name=uops_retired_retire_slots/,stalled-cycles-frontend,stalled-cycles-backend ./prefix-sum-noavx

 Performance counter stats for './prefix-sum-noavx':

        206.986720      task-clock (msec)         #    0.999 CPUs utilized          
       777,473,726      cycles                    #    3.756 GHz                    
     2,604,757,487      instructions              #    3.35  insns per cycle        
                                                  #    0.01  stalled cycles per insn
     2,579,310,493      uops_issued_any           # 12461.237 M/sec
     2,828,479,147      uops_dispatched_thread    # 13665.027 M/sec
     2,829,198,313      uops_retired_all          # 13668.502 M/sec (unfused domain)
     2,579,016,838      uops_retired_retire_slots # 12459.818 M/sec (fused domain)
        35,298,807      stalled-cycles-frontend   #    4.54% frontend cycles idle   
         1,224,399      stalled-cycles-backend    #    0.16% backend  cycles idle   

       0.207234316 seconds time elapsed
------------------------------------------------------------


######### AVX (same source, but built with -mavx).  not AVX2 #########
$ ocperf.py stat -e task-clock,cycles,instructions,uops_issued.any,uops_dispatched.thread,uops_retired.all,uops_retired.retire_slots,stalled-cycles-frontend,stalled-cycles-backend ./prefix-sum-avx

 Performance counter stats for './prefix-sum-avx':

        203.429021      task-clock (msec)         #    0.999 CPUs utilized          
       764,859,441      cycles                    #    3.760 GHz                    
     2,079,716,097      instructions              #    2.72  insns per cycle        
                                                  #    0.12  stalled cycles per insn
     2,054,334,040      uops_issued_any           # 10098.530 M/sec                  
     2,303,378,797      uops_dispatched_thread    # 11322.764 M/sec                  
     2,304,140,578      uops_retired_all          # 11326.509 M/sec                  
     2,053,968,862      uops_retired_retire_slots # 10096.735 M/sec                  
       240,883,566      stalled-cycles-frontend   #   31.49% frontend cycles idle   
         1,224,637      stalled-cycles-backend    #    0.16% backend  cycles idle   

       0.203732797 seconds time elapsed
------------------------------------------------------------


################## scalar version (cmdline arg) #############    
$ ocperf.py stat -e task-clock,cycles,instructions,uops_issued.any,uops_dispatched.thread,uops_retired.all,uops_retired.retire_slots,stalled-cycles-frontend,stalled-cycles-backend ./prefix-sum-avx n

 Performance counter stats for './prefix-sum-avx n':

        287.567070      task-clock (msec)         #    0.999 CPUs utilized          
     1,082,611,453      cycles                    #    3.765 GHz                    
     2,381,840,355      instructions              #    2.20  insns per cycle        
                                                  #    0.20  stalled cycles per insn
     2,272,652,370      uops_issued_any           # 7903.034 M/sec                  
     4,262,838,836      uops_dispatched_thread    # 14823.807 M/sec                  
     4,256,351,856      uops_retired_all          # 14801.249 M/sec                  
     2,256,150,510      uops_retired_retire_slots # 7845.650 M/sec                  
       465,018,146      stalled-cycles-frontend   #   42.95% frontend cycles idle   
         6,321,098      stalled-cycles-backend    #    0.58% backend  cycles idle   

       0.287901811 seconds time elapsed

------------------------------------------------------------    

Haswell should be about the same, but maybe slightly slower per-clock, because shuffle can only run on port 5, not port 1. (vector-int add is still p1/5 on Haswell.)

OTOH, IACA thinks Haswell will be slightly faster than SnB for one iteration, if you compile without -funroll-loops (which does help on SnB). Haswell can do branches on port6, but on SnB branches are on port5, which we already saturate.

 # compile without -DIACA_MARKS_OFF
$ iaca -64 -mark 1 -arch HSW prefix-sum-avx    
Intel(R) Architecture Code Analyzer Version - 2.1
Analyzed File - prefix-sum-avx
Binary Format - 64Bit
Architecture  - HSW
Analysis Type - Throughput

*******************************************************************
Intel(R) Architecture Code Analyzer Mark Number 1
*******************************************************************

Throughput Analysis Report
--------------------------
Block Throughput: 6.20 Cycles       Throughput Bottleneck: Port5

Port Binding In Cycles Per Iteration:
---------------------------------------------------------------------------------------
|  Port  |  0   -  DV  |  1   |  2   -  D   |  3   -  D   |  4   |  5   |  6   |  7   |
---------------------------------------------------------------------------------------
| Cycles | 1.0    0.0  | 5.8  | 1.4    1.0  | 1.4    1.0  | 2.0  | 6.2  | 1.0  | 1.3  |
---------------------------------------------------------------------------------------

N - port number or number of cycles resource conflict caused delay, DV - Divider pipe (on port 0)
D - Data fetch pipe (on ports 2 and 3), CP - on a critical path
F - Macro Fusion with the previous instruction occurred
* - instruction micro-ops not bound to a port
^ - Micro Fusion happened
# - ESP Tracking sync uop was issued
@ - SSE instruction followed an AVX256 instruction, dozens of cycles penalty is expected
! - instruction not supported, was not accounted in Analysis

| Num Of |                    Ports pressure in cycles                     |    |
|  Uops  |  0  - DV  |  1  |  2  -  D  |  3  -  D  |  4  |  5  |  6  |  7  |    |
---------------------------------------------------------------------------------
|   1    |           |     | 1.0   1.0 |           |     |     |     |     |    | vmovdqa xmm2, xmmword ptr [rax]
|   1    | 1.0       |     |           |           |     |     |     |     |    | add rax, 0x20
|   1    |           |     |           | 1.0   1.0 |     |     |     |     |    | vmovdqa xmm3, xmmword ptr [rax-0x10]
|   1    |           |     |           |           |     | 1.0 |     |     | CP | vpslldq xmm1, xmm2, 0x4
|   1    |           | 1.0 |           |           |     |     |     |     |    | vpaddd xmm2, xmm2, xmm1
|   1    |           |     |           |           |     | 1.0 |     |     | CP | vpslldq xmm1, xmm3, 0x4
|   1    |           | 1.0 |           |           |     |     |     |     |    | vpaddd xmm3, xmm3, xmm1
|   1    |           |     |           |           |     | 1.0 |     |     | CP | vpslldq xmm1, xmm2, 0x8
|   1    |           | 1.0 |           |           |     |     |     |     |    | vpaddd xmm2, xmm2, xmm1
|   1    |           |     |           |           |     | 1.0 |     |     | CP | vpslldq xmm1, xmm3, 0x8
|   1    |           | 1.0 |           |           |     |     |     |     |    | vpaddd xmm3, xmm3, xmm1
|   1    |           | 0.9 |           |           |     | 0.2 |     |     | CP | vpaddd xmm1, xmm2, xmm0
|   2^   |           |     |           |           | 1.0 |     |     | 1.0 |    | vmovaps xmmword ptr [rax-0x20], xmm1
|   1    |           |     |           |           |     | 1.0 |     |     | CP | vpshufd xmm1, xmm1, 0xff
|   1    |           | 0.9 |           |           |     | 0.1 |     |     | CP | vpaddd xmm0, xmm1, xmm3
|   2^   |           |     | 0.3       | 0.3       | 1.0 |     |     | 0.3 |    | vmovaps xmmword ptr [rax-0x10], xmm0
|   1    |           |     |           |           |     | 1.0 |     |     | CP | vpshufd xmm0, xmm0, 0xff
|   1    |           |     |           |           |     |     | 1.0 |     |    | cmp rax, 0x602020
|   0F   |           |     |           |           |     |     |     |     |    | jnz 0xffffffffffffffa3
Total Num Of Uops: 20

BTW, gcc compiled the loop to use a one-register addressing mode even when I had a loop counter and was doing load(datavec + i + 1). That's the best code, esp. on SnB-family where 2-register addressing modes can't micro-fuse, so I changed the source to that loop condition for the benefit of clang.
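
To illustrate the two source-level loop shapes (a hypothetical side-by-side, not the benchmarked code):

#include <emmintrin.h>

// Indexed form: may compile to [base + index*16] addressing, which can't stay
// micro-fused on SnB-family (gcc converts it to pointer increments anyway).
void inc_indexed(__m128i *v, int nvec) {
    for (int i = 0; i < nvec; i++)
        _mm_store_si128(v + i, _mm_add_epi32(_mm_load_si128(v + i), _mm_set1_epi32(1)));
}

// Pointer-compare form: one-register addressing, so the memory uops micro-fuse;
// this is the shape prefix_sum_sse above uses.
void inc_ptr(__m128i *v, int nvec) {
    for (__m128i *end = v + nvec; v < end; v++)
        _mm_store_si128(v, _mm_add_epi32(_mm_load_si128(v), _mm_set1_epi32(1)));
}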


NOTE: in the C++ standard, "prefix sum" is called "inclusive scan", so that is what we call it.

We have ported and generalised the SIMD part of @Z boson's answer (thanks for the fantastic work!) to all x86 (SSE to AVX-512) and ARM (NEON/AArch64) in the eve library. It's open source and MIT licensed.

NOTE: we only support the version of AVX-512 that matches skylake-avx512. If your machine does not support all of those requirements, we will use AVX2.

We also support doing this over parallel arrays, so for example you can do an inclusive scan for complex numbers: example.

Here you can see the asm we generate for different architectures (you can change the type T for different element types): godbolt. In case the links go stale: arm-64, avx-2.

Here are some numbers for int, for different x86 architectures, compared to the scalar code on 10'000 bytes of data, measured on an Intel 9700K.

NOTE: unfortunately, no benchmarks for arm at this point.

[benchmark chart: int results]

On SSE2-SSE4 we are about 1.6-1.7 times faster; on AVX2, about 2 times.

The algorithm's speedup limit is cardinal / log2(cardinal): 4 / 2 = 2 times on SSE2-SSE4.2 and 8 / 3 ≈ 2.7 times on AVX2, so we are not too far off.

What about threads?

In the eve library we don't deal with threads directly; however, we have inclusive_scan and transform, which are the building blocks for the parallel version.

Here is my sketch of a parallel/vectorised version on top of eve. You'd need some decent threading library though; I used std::async/std::future in the example, which are bad.
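
As a rough illustration of that structure (not the linked sketch itself, and with plain std::inclusive_scan / std::for_each standing in for eve's vectorised building blocks):

#include <algorithm>
#include <future>
#include <numeric>
#include <vector>

// Two-pass parallel inclusive scan over chunks, illustrative only.
// Pass 1: scan each chunk independently.  Pass 2: add the total of all
// preceding chunks to every element of the chunk.
void parallel_inclusive_scan(std::vector<int>& v, int nchunks) {
    if (v.empty() || nchunks < 1) return;
    const std::size_t chunk = (v.size() + nchunks - 1) / nchunks;
    std::vector<std::future<void>> tasks;

    for (int c = 0; c < nchunks; ++c) {            // pass 1
        auto first = v.begin() + std::min(v.size(), c * chunk);
        auto last  = v.begin() + std::min(v.size(), (c + 1) * chunk);
        tasks.push_back(std::async(std::launch::async,
            [first, last] { std::inclusive_scan(first, last, first); }));
    }
    for (auto& t : tasks) t.get();
    tasks.clear();

    std::vector<int> offsets(nchunks, 0);          // offsets[c] = total of chunks 0..c-1
    for (int c = 1; c < nchunks; ++c)
        offsets[c] = offsets[c - 1] + v[std::min(v.size(), c * chunk) - 1];

    for (int c = 1; c < nchunks; ++c) {            // pass 2
        auto first = v.begin() + std::min(v.size(), c * chunk);
        auto last  = v.begin() + std::min(v.size(), (c + 1) * chunk);
        const int off = offsets[c];
        tasks.push_back(std::async(std::launch::async,
            [first, last, off] { std::for_each(first, last, [off](int& x) { x += off; }); }));
    }
    for (auto& t : tasks) t.get();
}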

Other relevant capabilities

You can use inclusive_scan_to instead of inclusive_scan_inplace if you want to keep the original data: inclusive_scan_to

We also support different types (the same way as the standard inclusive scan), so you can sum floats into doubles and similar: floats to doubles

We support custom plus operations, so you can use min if you want. I already mentioned zip and the ability to scan multiple arrays in one go: example

Feel free to create an issue on the library if you want to try it out and need help.