Subtracting packed 8-bit integers in a 64-bit integer by 1 in parallel, SWAR without hardware SIMD

Tags: C++, C, Bit Manipulation, SIMD, SWAR

C++ Problem Overview


I have a 64-bit integer that I'm interpreting as an array of 8 packed 8-bit integers. I need to subtract the constant 1 from each packed integer while handling overflow, without the result of one element affecting the result of another element.

I have this code at the moment and it works, but I need a solution that does the subtraction of each packed 8-bit integer in parallel and doesn't make memory accesses. On x86 I could use SIMD instructions like psubb that subtract packed 8-bit integers in parallel, but the platform I'm coding for doesn't support SIMD instructions (RISC-V in this case).

So I'm trying to do SWAR (SIMD within a register) to manually cancel out carry propagation between bytes of a uint64_t, doing something equivalent to this:

uint64_t sub(uint64_t arg) {
    uint8_t* packed = (uint8_t*) &arg;

    for (size_t i = 0; i < sizeof(uint64_t); ++i) {
        packed[i] -= 1;
    }

    return arg;
}

I think you could do this with bitwise operators, but I'm not sure. I'm looking for a solution that doesn't use SIMD instructions: either reasonably portable C or C++, or just the theory behind it so I can implement my own solution.

C++ Solutions


Solution 1 - C++

If you have a CPU with efficient SIMD instructions, SSE/MMX paddb (_mm_add_epi8) is also viable. Peter Cordes' answer also describes GNU C (gcc/clang) vector syntax, and safety for strict-aliasing UB. I strongly encourage reviewing that answer as well.

Doing it yourself with uint64_t is fully portable, but still requires care to avoid alignment problems and strict-aliasing UB when accessing a uint8_t array with a uint64_t*. You left that part out of the question by starting with your data in a uint64_t already, but for GNU C a may_alias typedef solves the problem (see Peter's answer for that or memcpy).

Otherwise you could allocate / declare your data as uint64_t and access it via uint8_t* when you want individual bytes. unsigned char* is allowed to alias anything so that sidesteps the problem for the specific case of 8-bit elements. (If uint8_t exists at all, it's probably safe to assume it's an unsigned char.)


Note that this is a change from a prior incorrect algorithm (see revision history).

This is possible without looping for arbitrary subtraction, and gets more efficient for a known constant like 1 in each byte. The main trick is to prevent carry-out from each byte by setting the high bit, then correct the subtraction result.

We are going to slightly optimize the subtraction technique given here. They define:

> SWAR sub: z = x - y
> z = ((x | H) - (y & ~H)) ^ ((x ^ ~y) & H)

with H defined as 0x8080808080808080U (i.e. the MSBs of each packed integer). For a decrement, y is 0x0101010101010101U.
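
Written out directly as code, the general formula looks like this (a sketch; the function name swar_sub is mine, not from the quoted source):

#include <stdint.h>

// Per-byte z = x - y; H masks the MSB of every byte so borrows can't cross byte boundaries.
uint64_t swar_sub(uint64_t x, uint64_t y) {
    const uint64_t H = 0x8080808080808080U;
    return ((x | H) - (y & ~H)) ^ ((x ^ ~y) & H);
}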

We know that y has all of its MSBs clear, so we can skip one of the mask steps (i.e. y & ~H is the same as y in our case). The calculation proceeds as follows:

  1. We set the MSBs of each component of x to 1, so that a borrow cannot propagate past the MSB to the next component. Call this the adjusted input.
  2. We subtract 1 from each component, by subtracting 0x0101010101010101 from the adjusted input. This does not cause inter-component borrows thanks to step 1. Call this the adjusted output.
  3. We now need to correct the MSBs of the result. We xor the adjusted output with the inverted MSBs of the original input to finish fixing up the result.

The operation can be written as:
#define U64MASK 0x0101010101010101U
#define MSBON 0x8080808080808080U
uint64_t decEach(uint64_t i) {
    return ((i | MSBON) - U64MASK) ^ ((i ^ MSBON) & MSBON);
}

Preferably, this is inlined by the compiler (use compiler directives to force this), or the expression is written inline as part of another function.

Testcases:

in:  0000000000000000
out: ffffffffffffffff

in:  f200000015000013
out: f1ffffff14ffff12

in:  0000000000000100
out: ffffffffffff00ff

in:  808080807f7f7f7f
out: 7f7f7f7f7e7e7e7e

in:  0101010101010101
out: 0000000000000000
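
A minimal harness to reproduce those cases (the harness itself is my addition; it assumes the decEach definition above is in the same translation unit):

#include <cstdint>
#include <cstdio>

uint64_t decEach(uint64_t i);   // defined above

int main() {
    const uint64_t cases[][2] = {
        {0x0000000000000000U, 0xffffffffffffffffU},
        {0xf200000015000013U, 0xf1ffffff14ffff12U},
        {0x0000000000000100U, 0xffffffffffff00ffU},
        {0x808080807f7f7f7fU, 0x7f7f7f7f7e7e7e7eU},
        {0x0101010101010101U, 0x0000000000000000U},
    };
    for (const auto &c : cases) {
        uint64_t got = decEach(c[0]);
        std::printf("in: %016llx  out: %016llx  %s\n",
                    (unsigned long long)c[0], (unsigned long long)got,
                    got == c[1] ? "ok" : "MISMATCH");
    }
    return 0;
}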

Performance details

Here's the x86_64 assembly for a single invocation of the function. For better performance it should be inlined with the hope that the constants can live in a register as long as possible. In a tight loop where the constants live in a register, the actual decrement takes five instructions: or+not+and+add+xor after optimization. I don't see alternatives that would beat the compiler's optimization.

decEach(uint64_t):                      # arg in RDI, result returned in RAX
    movabs  rcx, -9187201950435737472
    mov     rdx, rdi
    or      rdx, rcx
    movabs  rax, -72340172838076673
    add     rax, rdx
    and     rdi, rcx
    xor     rdi, rcx
    xor     rax, rdi
    ret

With some IACA testing of the following snippet:

// Repeat the SWAR dec in a loop as a microbenchmark
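// (IACA_START / IACA_END are the marker macros from Intel IACA's iacaMarks.h.)
// (Note: the loop body measures a plain subtract plus an LSB-based fix-up, not decEach() itself.)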
uint64_t perftest(uint64_t dummyArg){
    uint64_t dummyCounter = 0;
    uint64_t i = 0x74656a6d27080100U; // another dummy value.
    while(i ^ dummyArg) {
        IACA_START
        uint64_t naive = i - U64MASK;
        i = naive + ((i ^ naive ^ U64MASK) & U64MASK);
        dummyCounter++;
    }
    IACA_END
    return dummyCounter;
}


we can show that, on a Skylake machine, the decrement, xor, and compare+jump execute at just under 5 cycles per iteration:

Throughput Analysis Report
--------------------------
Block Throughput: 4.96 Cycles       Throughput Bottleneck: Backend
Loop Count:  26
Port Binding In Cycles Per Iteration:
--------------------------------------------------------------------------------------------------
|  Port  |   0   -  DV   |   1   |   2   -  D    |   3   -  D    |   4   |   5   |   6   |   7   |
--------------------------------------------------------------------------------------------------
| Cycles |  1.5     0.0  |  1.5  |  0.0     0.0  |  0.0     0.0  |  0.0  |  1.5  |  1.5  |  0.0  |
--------------------------------------------------------------------------------------------------

(Of course, on x86-64 you'd just load or movq into an XMM reg for paddb, so it might be more interesting to look at how it compiles for an ISA like RISC-V.)

Solution 2 - C++

For RISC-V you're probably using GCC/clang.

Fun fact: GCC knows some of these SWAR bithack tricks (shown in other answers) and can use them for you when compiling code with GNU C native vectors for targets without hardware SIMD instructions. (But clang for RISC-V will just naively unroll it to scalar operations, so you do have to do it yourself if you want good performance across compilers).

One advantage to native vector syntax is that when targeting a machine with hardware SIMD, it will use that instead of auto-vectorizing your bithack or something horrible like that.

It makes it easy to write vector -= scalar operations; the syntax Just Works, implicitly broadcasting aka splatting the scalar for you.


Also note that a uint64_t* load from a uint8_t array[] is strict-aliasing UB, so be careful with that. (See also https://stackoverflow.com/questions/57650895/why-does-glibcs-strlen-need-to-be-so-complicated-to-run-quickly/57676035#57676035 re: making SWAR bithacks strict-aliasing safe in pure C). You may want something like this to declare a uint64_t that you can pointer-cast to access any other objects, like how char* works in ISO C / C++.

Use these to get uint8_t data into a uint64_t for use with the other answers:

// GNU C: gcc/clang/ICC but not MSVC
typedef uint64_t  aliasing_u64 __attribute__((may_alias));  // still requires alignment
typedef uint64_t  aliasing_unaligned_u64 __attribute__((may_alias, aligned(1)));
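
For instance, a load through those typedefs could look like this (the wrapper name load_u64_gnu is mine):

// GNU C only: strict-aliasing- and alignment-safe 8-byte load from a byte buffer,
// using the aliasing_unaligned_u64 typedef above.
uint64_t load_u64_gnu(const uint8_t *p) {
    return *(const aliasing_unaligned_u64 *)p;
}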

The other way to do aliasing-safe loads is with memcpy into a uint64_t, which also removes the alignof(uint64_t) alignment requirement. But on ISAs without efficient unaligned loads, gcc/clang don't inline and optimize away memcpy when they can't prove the pointer is aligned, which would be disastrous for performance.
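
A memcpy-based load might look like the following sketch (the helper name load_u64_memcpy is mine):

#include <stdint.h>
#include <string.h>

// Aliasing- and alignment-safe load; compilers that can see the alignment
// typically optimize this down to a single 8-byte load.
static inline uint64_t load_u64_memcpy(const uint8_t *p) {
    uint64_t v;
    memcpy(&v, p, sizeof v);
    return v;
}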

TL:DR: your best bet is to declare your data as uint64_t array[...] or allocate it dynamically as uint64_t, or preferably alignas(16) uint64_t array[]; that ensures alignment to at least 8 bytes, or 16 if you specify alignas.

Since uint8_t is almost certainly unsigned char, it's safe to access the bytes of a uint64_t via uint8_t* (but not vice versa for a uint8_t array). So for this special case where the narrow element type is unsigned char, you can sidestep the strict-aliasing problem because char is special.
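
For example, a sketch of that recommendation (the array size and names are arbitrary):

#include <cstdint>

alignas(16) uint64_t array[128];   // 16-byte-aligned backing storage

void example() {
    uint8_t *bytes = reinterpret_cast<uint8_t *>(array);  // fine: uint8_t (unsigned char) may alias anything
    bytes[0] = 42;   // byte access through the uint64_t object is OK in this direction
    // ... then run the SWAR decrement on array[i] ...
}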


GNU C native vector syntax example:

GNU C native vectors are always allowed to alias their underlying type (e.g. int __attribute__((vector_size(16))) can safely alias int, but not float or uint8_t or anything else).

#include <stdint.h>
#include <stddef.h>

// assumes array is 16-byte aligned
void dec_mem_gnu(uint8_t *array) {
    typedef uint8_t v16u8 __attribute__ ((vector_size (16), may_alias));
    v16u8 *vecs = (v16u8*) array;
    vecs[0] -= 1;
    vecs[1] -= 1;   // can be done in a loop.
}

For RISC-V without any HW SIMD, you could use vector_size(8) to express just the granularity you can efficiently use, and do twice as many smaller vectors.

But vector_size(8) compiles very stupidly for x86 with both GCC and clang: GCC uses SWAR bithacks in GP-integer registers, clang unpacks to 2-byte elements to fill a 16-byte XMM register then repacks. (MMX is so obsolete that GCC/clang don't even bother using it, at least not for x86-64.)

But with vector_size (16) (Godbolt) we get the expected movdqa / paddb. (With an all-ones vector generated by pcmpeqd same,same). With -march=skylake we still get two separate XMM ops instead of one YMM, so unfortunately current compilers also don't "auto-vectorize" vector ops into wider vectors :/

For AArch64, it's not so bad to use vector_size(8) (Godbolt); ARM/AArch64 can natively work in 8 or 16-byte chunks with d or q registers.

So vector_size(16) is probably what you want to compile with if you want portable performance across x86, RISC-V, ARM/AArch64, and POWER. However, some other ISAs do SIMD within 64-bit integer registers, like MIPS MSA I think.

vector_size(8) makes it easier to look at the asm (only one register worth of data): Godbolt compiler explorer
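
For reference, the source producing the asm below would look roughly like this single-vector vector_size(8) variant (my reconstruction; the typedef name is mine):

#include <stdint.h>

// one 8-byte vector, matching the "only one vector" asm below
typedef uint8_t v8u8 __attribute__ ((vector_size (8), may_alias));

void dec_mem_gnu(uint8_t *array) {
    v8u8 *vecs = (v8u8 *) array;
    vecs[0] -= 1;
}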

# GCC8.2 -O3 for RISC-V for vector_size(8) and only one vector

dec_mem_gnu(unsigned char*):
        lui     a4,%hi(.LC1)           # generate address for static constants.
        ld      a5,0(a0)                 # a5 = load from function arg
        ld      a3,%lo(.LC1)(a4)       # a3 = 0x7F7F7F7F7F7F7F7F
        lui     a2,%hi(.LC0)
        ld      a2,%lo(.LC0)(a2)       # a2 = 0x8080808080808080
                             # above here can be hoisted out of loops
        not     a4,a5                  # nx = ~x
        and     a5,a5,a3               # x &= 0x7f... clear high bit
        and     a4,a4,a2               # nx = (~x) & 0x80... inverse high bit isolated
        add     a5,a5,a3               # x += 0x7f...   (128-1)
        xor     a5,a4,a5               # x ^= nx  restore high bit or something.

        sd      a5,0(a0)               # store the result
        ret

I think it's the same basic idea as the other non-looping answers; preventing carry then fixing up the result.

This is 5 ALU instructions, worse than the top answer I think. But it looks like critical path latency is only 3 cycles, with two chains of 2 instructions each leading to the XOR. @Reinstate Monica - ζ--'s answer compiles to a 4-cycle dep chain (for x86). The 5-cycle loop throughput is bottlenecked by also including a naive sub on the critical path, and the loop does bottleneck on latency.

However, this is useless with clang. It doesn't even add and store in the same order it loaded so it's not even doing good software pipelining!

# RISC-V clang (trunk) -O3
dec_mem_gnu(unsigned char*):
        lb      a6, 7(a0)
        lb      a7, 6(a0)
        lb      t0, 5(a0)
...
        addi    t1, a5, -1
        addi    t2, a1, -1
        addi    t3, a2, -1
...
        sb      a2, 7(a0)
        sb      a1, 6(a0)
        sb      a5, 5(a0)
...
        ret

Solution 3 - C++

I'd point out that the code you've written does actually vectorize once you start dealing with more than a single uint64_t.

https://godbolt.org/z/J9DRzd
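
For instance, a version that works on a whole buffer of bytes (the function sub_many is my illustration of that point) gives the compiler a loop it can auto-vectorize when the target has SIMD:

#include <stdint.h>
#include <stddef.h>

// Same per-byte decrement as the question's code, but over a whole buffer;
// with optimization enabled, compilers vectorize this on SIMD-capable targets.
void sub_many(uint8_t *packed, size_t nbytes) {
    for (size_t i = 0; i < nbytes; ++i)
        packed[i] -= 1;
}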

Solution 4 - C++

You can make sure the subtraction doesn't overflow and then fix up the high bit:

uint64_t sub(uint64_t arg) {
    uint64_t x1 = arg | 0x80808080808080;
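    // (note the constant covers only the low 7 bytes' MSBs; that is still correct
    //  because a borrow out of bit 63 simply falls off the end of the register)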
    uint64_t x2 = ~arg & 0x80808080808080;
    // or uint64_t x2 = arg ^ x1; to save one instruction if you don't have an andnot instruction
    return (x1 - 0x101010101010101) ^ x2;
}

Solution 5 - C++

Not sure if this is what you want but it does the 8 subtractions in parallel to each other:

#include <cstdint>

constexpr uint64_t mask = 0x0101010101010101;

uint64_t sub(uint64_t arg) {
    uint64_t mask_cp = mask;
    for(auto i = 0; i < 8 && mask_cp; ++i) {
        uint64_t new_mask = (arg & mask_cp) ^ mask_cp;
        arg = arg ^ mask_cp;
        mask_cp = new_mask << 1;
    }
    return arg;
}

Explanation: The bitmask starts with a 1 in the lowest bit of each of the 8-bit numbers. We xor it with our argument. If we had a 1 in this place, we subtracted 1 and have to stop; this is done by setting the corresponding bit to 0 in new_mask. If we had a 0, we set it to 1 and have to carry the borrow, so the bit stays 1 in the mask and we shift the mask to the left. You had better check for yourself whether the generation of the new mask works as intended; I think so, but a second opinion would not be bad.

PS: I am actually unsure whether the check on mask_cp being non-zero in the loop slows the program down. Without it, the code would still be correct (since a zero mask just does nothing) and it would be much easier for the compiler to do loop unrolling.

Solution 6 - C++

int subtractone(int x)
{
    int f = 1;

    // Flip trailing 0 bits until we find a set bit at position y
    // (bounded at bit 8 so an all-zero byte terminates too)
    while (!(x & f) && f < 0x100) {
        x = x ^ f;
        f <<= 1;
    }

    return (x ^ f) & 0xFF; // flip the bit at y; a borrow out of an all-zero byte wraps to 0xFF
}

You can do it with bitwise operations using the above; you just have to split your integer into 8-bit pieces and send each one through this function. The following part is adapted from https://stackoverflow.com/questions/20041899/how-to-split-a-64-bit-number-into-eight-8-bit-values, combined with the above function:

uint64_t v = _64bitVariable;
uint8_t i = 0, parts[8] = {0};
// process all 8 bytes, including high zero bytes (which must wrap to 0xFF)
do parts[i] = subtractone((v >> (8 * i)) & 0xFF); while (++i < 8);

It is valid C or C++, whichever language someone comes across this in.
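
If you then need the result back in a single uint64_t, a repacking step (not part of the original answer; it continues the snippet above) could look like:

uint64_t result = 0;
for (int k = 7; k >= 0; --k)
    result = (result << 8) | parts[k];   // parts[0] was the least-significant byte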

Solution 7 - C++

Not going to try to come up with the code, but for a decrement by 1 you could decrement by the group of 8 1s and then check to be sure that the LSBs of the results had "flipped". Any LSB that hasn't toggled indicates that a carry occurred from the adjacent 8 bits. It should be possible to work out a sequence of ANDs/ORs/XORs to handle this, without any branches.
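
A sketch of that idea (my code, not the answerer's): do the plain 64-bit subtract, use the LSB-toggle test to find the bytes that took a borrow from below, then add 1 back to just those bytes with a carry-less per-byte add so the fix-up itself can't spill into the next byte. The fix mask is the same expression used in the IACA snippet in Solution 1.

#include <stdint.h>

uint64_t sub_lsb_fixup(uint64_t x) {
    const uint64_t ones = 0x0101010101010101U;
    const uint64_t msbs = 0x8080808080808080U;

    uint64_t naive = x - ones;                   // borrows may cross byte boundaries here
    uint64_t fix   = (x ^ naive ^ ones) & ones;  // 0x01 in each byte whose LSB did NOT toggle
    // add `fix` per byte without letting carries cross bytes (standard SWAR add)
    return ((naive & ~msbs) + fix) ^ (naive & msbs);
}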

Solution 8 - C++

Work on each byte fully on its own, then put it back where it was.

uint64_t sub(uint64_t arg) {
    uint64_t res = 0;

    for (int i = 0; i < 64; i += 8)
        res += (((arg >> i) - 1) & 0xFFU) << i;

    return res;
}

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type        Original Author
Question            cam-white
Solution 1 - C++    nanofarad
Solution 2 - C++    Peter Cordes
Solution 3 - C++    robthebloke
Solution 4 - C++    Falk Hüffner
Solution 5 - C++    n314159
Solution 6 - C++    LTPCGO
Solution 7 - C++    Hot Licks
Solution 8 - C++    user12450543