# Extracting bits with a single multiplication

## C Problem Overview

I saw an interesting technique used in an answer to another question, and would like to understand it a little better.

We're given an unsigned 64-bit integer, and we are interested in the following bits:

```
1.......2.......3.......4.......5.......6.......7.......8.......
```

Specifically, we'd like to move them to the top eight positions, like so:

```
12345678........................................................
```

We don't care about the value of the bits indicated by `.`, and they don't have to be preserved.

The solution was to mask out the unwanted bits, and multiply the result by `0x2040810204081`. This, as it turns out, does the trick.

How general is this method? Can this technique be used to extract any subset of bits? If not, how does one figure out whether or not the method works for a particular set of bits?

Finally, how would one go about finding the (a?) correct multiplier to extract the given bits?

## C Solutions

## Solution 1 - C

Very interesting question, and clever trick.

Let's look at a simple example of getting a single byte manipulated, using unsigned 8-bit arithmetic for simplicity. Imagine your number is `xxaxxbxx` and you want `ab000000`.

The solution consisted of two steps: a bit masking, followed by multiplication. The bit mask is a simple AND operation that turns uninteresting bits to zeros. In the above case, your mask would be `00100100` and the result `00a00b00`.

Now the hard part: turning that into `ab......`.

A multiplication is a bunch of shift-and-add operations. The key is to allow overflow to "shift away" the bits we don't need and put the ones we want in the right place.

Multiplication by 4 (`00000100`) would shift everything left by 2 and get you to `a00b0000`. To get the `b` to move up we need to multiply by 1 (to keep the `a` in the right place) + 4 (to move the `b` up). This sum is 5, and combined with the earlier 4 we get a magic number of 20, or `00010100`. The original was `00a00b00` after masking; the multiplication gives:

```
000000a00b000000
00000000a00b0000 +
----------------
000000a0ab0b0000
xxxxxxxxab......
```

From this approach you can extend to larger numbers and more bits.
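The masked multiply above is short enough to write out as real code. A minimal 8-bit sketch of this exact example (names are mine):

```c
#include <stdint.h>

/* 8-bit toy example: input is xxaxxbxx, output has ab in the top two bits.
   Mask 0x24 = 00100100 keeps a and b; the magic 20 = 00010100 shifts a up
   by 2 and b up by 4; overflow out of 8 bits discards the rest. */
static uint8_t extract_ab(uint8_t x)
{
    return (uint8_t)((x & 0x24u) * 20u);
}
```

For instance, `extract_ab(0x20)` (only `a` set) gives `0x80` and `extract_ab(0x04)` (only `b` set) gives `0x50`: the top two bits are `10` and `01` respectively, and the lower six bits are the don't-care residue (including the "stray b").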

One of the questions you asked was "can this be done with any number of bits?" I think the answer is "no", unless you allow several masking operations, or several multiplications. The problem is the issue of "collisions" - for example, the "stray b" in the problem above. Imagine we need to do this to a number like `xaxxbxxcx`. Following the earlier approach, you would think we need {x 2, x {1 + 4 + 16}} = x 42 (oooh - the answer to everything!). Result:

```
00000000a00b00c00
000000a00b00c0000
0000a00b00c000000
-----------------
0000a0ababcbc0c00
xxxxxxxxabc......
```

As you can see, it still works, but "only just". The key here is that there is "enough space" between the bits we want, so that we can squeeze everything up. I could not add a fourth bit `d` right after `c`, because I would get instances where `c+d` occurs, bits might carry, ...

So without formal proof, I would answer the more interesting parts of your question as follows: "No, this will not work for any number of bits. To extract N bits, you need (N-1) spaces between the bits you want to extract, or have additional mask-multiply steps."

The only exception I can think of for the "must have (N-1) zeros between bits" rule is this: if you want to extract two bits that are adjacent to each other in the original, AND you want to keep them in the same order, then you can still do it. And for the purpose of the (N-1) rule they count as two bits.

There is another insight - inspired by the answer of @Ternary below (see my comment there). For each interesting bit, you only need as many zeros to the right of it as you need space for bits that need to go there; likewise, it needs as many zeros to its left as it has result-bits to its left. So if a bit b ends up in position m of n, then it needs to have m-1 zeros to its left, and n-m zeros to its right. Especially when the bits are not in the same order in the original number as they will be after the re-ordering, this is an important improvement on the original criterion. This means, for example, that a 16-bit word

```
a...e.b...d..c..
```

Can be shifted into

```
abcde...........
```

even though there is only one space between e and b, two between d and c, and three between the others. Whatever happened to N-1?? In this case, `a...e` becomes "one block" - they are multiplied by 1 to end up in the right place, and so "we got e for free". The same is true for b and d (b needs three spaces to the right, d needs the same three to its left). So when we compute the magic number, we find there are duplicates:

```
a: << 0 ( x 1 )
b: << 5 ( x 32 )
c: << 11 ( x 2048 )
d: << 7 ( x 128 )
e: << 0 ( x 1 ) !! duplicate
```

Clearly, if you wanted these numbers in a different order, you would have to space them further. We can reformulate the `(N-1)` rule: "It will always work if there are at least (N-1) spaces between bits; or, if the order of bits in the final result is known, then if a bit b ends up in position m of n, it needs to have m-1 zeros to its left, and n-m zeros to its right."

@Ternary pointed out that this rule doesn't quite work, as there can be a carry from bits adding "just to the right of the target area" - namely, when the bits we're looking for are all ones. Continuing the example I gave above with the five tightly packed bits in a 16-bit word: if we start with

```
a...e.b...d..c..
```

For simplicity, I will name the bit positions `ABCDEFGHIJKLMNOP`. The math we were going to do was

```
ABCDEFGHIJKLMNOP
a000e0b000d00c00
0b000d00c0000000
000d00c000000000
00c0000000000000 +
----------------
abcded(b+c)0c0d00c00
```

Until now, we thought anything below `abcde` (positions `ABCDE`) would not matter, but in fact, as @Ternary pointed out, if `b=1, c=1, d=1` then `(b+c)` in position `G` will cause a bit to carry to position `F`, which means that `(d+1)` in position `F` will carry a bit into `E` - and our result is spoilt. Note that space to the right of the least significant bit of interest (`c` in this example) doesn't matter, since the multiplication will cause padding with zeros from beyond the least significant bit.

So we need to modify our (m-1)/(n-m) rule. If there is more than one bit that has "exactly (n-m) unused bits to the right" (not counting the last bit in the pattern - `c` in the example above), then we need to strengthen the rule - and we have to do so iteratively!

We have to look not only at the number of bits that meet the (n-m) criterion, but also at the ones that are at (n-m+1), etc. Let's call their number Q0 (exactly `n-m` to the next bit), Q1 (n-m+1), up to Q(N-1) (n-1). Then we risk carry if

```
Q0 > 1
Q0 == 1 && Q1 >= 2
Q0 == 0 && Q1 >= 4
Q0 == 1 && Q1 > 1 && Q2 >=2
...
```

If you look at this, you can see that if you write a simple mathematical expression

```
W = N * Q0 + (N - 1) * Q1 + ... + Q(N-1)
```

and the result is `W > 2 * N`, then you need to increase the RHS criterion by one bit to `(n-m+1)`. At this point, the operation is safe as long as `W < 4`; if that doesn't work, increase the criterion one more, etc.

I think that following the above will get you a long way to your answer...

## Solution 2 - C

Very interesting question indeed. I'm chiming in with my two cents, which is that, if you can manage to state problems like this in terms of first-order logic over the bitvector theory, then theorem provers are your friend, and can potentially provide you with very fast answers to your questions. Let's re-state the problem being asked as a theorem:

"There exist some 64-bit constants 'mask' and 'multiplicand' such that, for all 64-bit bitvectors x, in the expression y = (x & mask) * multiplicand, we have that y.63 == x.63, y.62 == x.55, y.61 == x.47, etc."

If this sentence is in fact a theorem, then it is true that some values of the constants 'mask' and 'multiplicand' satisfy this property. So let's phrase this in terms of something that a theorem prover can understand, namely SMT-LIB 2 input:

```
(set-logic BV)
(declare-const mask (_ BitVec 64))
(declare-const multiplicand (_ BitVec 64))
(assert
  (forall ((x (_ BitVec 64)))
    (let ((y (bvmul (bvand mask x) multiplicand)))
      (and
        (= ((_ extract 63 63) x) ((_ extract 63 63) y))
        (= ((_ extract 55 55) x) ((_ extract 62 62) y))
        (= ((_ extract 47 47) x) ((_ extract 61 61) y))
        (= ((_ extract 39 39) x) ((_ extract 60 60) y))
        (= ((_ extract 31 31) x) ((_ extract 59 59) y))
        (= ((_ extract 23 23) x) ((_ extract 58 58) y))
        (= ((_ extract 15 15) x) ((_ extract 57 57) y))
        (= ((_ extract 7 7) x) ((_ extract 56 56) y))))))
(check-sat)
(get-model)
```

And now let's ask the theorem prover Z3 whether this is a theorem:

```
z3.exe /m /smt2 ExtractBitsThroughAndWithMultiplication.smt2
```

The result is:

```
sat
(model
(define-fun mask () (_ BitVec 64)
#x8080808080808080)
(define-fun multiplicand () (_ BitVec 64)
#x0002040810204081)
)
```

Bingo! It reproduces the result given in the original post in 0.06 seconds.

Looking at this from a more general perspective, we can view this as being an instance of a first-order program synthesis problem, which is a nascent area of research about which few papers have been published. A search for `"program synthesis" filetype:pdf` should get you started.

## Solution 3 - C

Every 1-bit in the multiplier is used to copy one of the bits into its correct position:

- `1` is already in the correct position, so multiply by `0x0000000000000001`.
- `2` must be shifted 7 bit positions to the left, so we multiply by `0x0000000000000080` (bit 7 is set).
- `3` must be shifted 14 bit positions to the left, so we multiply by `0x0000000000004000` (bit 14 is set).
- and so on, until
- `8` must be shifted 49 bit positions to the left, so we multiply by `0x0002000000000000` (bit 49 is set).

The multiplier is the sum of the multipliers for the individual bits.
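That summing recipe is easy to make constructive. Here is a hypothetical helper (my own sketch, not from the answer) that builds the multiplier from a list of source bit positions; I use OR rather than + so that a shift needed by two different bits is counted only once:

```c
#include <stdint.h>

/* Build the magic multiplier: source bit src[i] (0 = LSB, positions given
   MSB-first for the result) must land at result bit 63 - i, so it needs a
   left shift of (63 - i) - src[i].  Assumes src[i] <= 63 - i. */
static uint64_t build_multiplier(const int *src, int n)
{
    uint64_t m = 0;
    for (int i = 0; i < n; i++)
        m |= 1ULL << ((63 - i) - src[i]);  /* OR: shared shifts counted once */
    return m;
}
```

With `int src[8] = {63, 55, 47, 39, 31, 23, 15, 7};` (the pattern from the question) this returns `0x0002040810204081`.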

This only works because the bits to be collected are not too close together, so that the cross products which do not belong in our scheme either fall beyond the 64 bits or into the lower don't-care part.

Note that the other bits in the original number must be `0`. This can be achieved by masking them with an AND operation.

## Solution 4 - C

*(I'd never seen it before. This trick is great!)*

I'll expand a bit on Floris's assertion that when extracting `n` bits you need `n-1` spaces between any non-consecutive bits:

My initial thought (we'll see in a minute how this doesn't quite work) was that you could do better: if you want to extract `n` bits, you'll have a collision when extracting/shifting bit `i` if you have anyone (non-consecutive with bit `i`) in the `i-1` bits preceding or the `n-i` bits subsequent.

I'll give a few examples to illustrate:

`...a..b...c...` works (nobody in the 2 bits after `a`, nobody in the bit before and the bit after `b`, and nobody in the 2 bits before `c`):

```
a00b000c
+ 0b000c00
+ 00c00000
= abc.....
```

`...a.b....c...` fails because `b` is in the 2 bits after `a` (and gets pulled into someone else's spot when we shift `a`):

```
a0b0000c
+ 0b0000c0
+ 00c00000
= abX.....
```

`...a...b.c...` fails because `b` is in the 2 bits preceding `c` (and gets pushed into someone else's spot when we shift `c`):

```
a000b0c0
+ 0b0c0000
+ b0c00000
= Xbc.....
```

`...a...bc...d...` works because consecutive bits shift together:

```
a000bc000d
+ 0bc000d000
+ 000d000000
= abcd000000
```

**But we have a problem.** If we use `n-i` instead of `n-1` we could have the following scenario: what if we have a collision outside of the part that we care about, something we would mask away at the end, but whose carry bits end up interfering in the important un-masked range? (And note: the `n-1` requirement makes sure this doesn't happen by making sure the `i-1` bits after our un-masked range are clear when we shift the `i`th bit.)

`...a...b..c...d...` potentially fails on carry bits: `c` is within `n-1` after `b`, but satisfies the `n-i` criteria:

```
a000b00c000d
+ 0b00c000d000
+ 00c000d00000
+ 000d00000000
= abcdX.......
```

So why don't we just go back to that "`n-1` bits of space" requirement? **Because we can do better**:

`...a....b..c...d..` *fails* the "`n-1` bits of space" test, but *works* for our bit-extracting trick:

```
+ a0000b00c000d00
+ 0b00c000d000000
+ 00c000d00000000
+ 000d00000000000
= abcd...0X......
```

I can't come up with a good way to characterize these fields that *don't* have `n-1` space between important bits but still work for our operation. However, since **we know ahead of time** which bits we're interested in, we can check our filter to make sure we don't experience carry-bit collisions:

Compare `(-1 AND mask) * multiplier` against the expected all-ones result, `-1 << (64-n)` (for 64-bit unsigned).

The magic shift/multiply to extract our bits works if and only if the two are equal.
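In C, that acceptance test might look like the following sketch (function name is mine; I compare only the top `n` bits, since everything below them gets masked away after the multiply):

```c
#include <stdbool.h>
#include <stdint.h>

/* All-ones test: feed in -1 (every maskable bit set) and require that all
   n extracted bits arrive in the top n positions despite any carries. */
static bool magic_works(uint64_t mask, uint64_t multiplier, int n)
{
    uint64_t top = ~0ULL << (64 - n);              /* expected all-ones */
    return (((~0ULL & mask) * multiplier) & top) == top;
}
```

For the original problem, `magic_works(0x8080808080808080, 0x2040810204081, 8)` holds, while an obviously wrong multiplier such as `1` fails.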

## Solution 5 - C

In addition to the already excellent answers to this very interesting question, it might be useful to know that this bitwise multiplication trick has been known in the computer chess community since 2007, where it goes under the name of **Magic BitBoards**.

Many computer chess engines use several 64-bit integers (called bitboards) to represent the various piece sets (1 bit per occupied square). Suppose a sliding piece (rook, bishop, queen) on a certain origin square could move to at most `K` squares if no blocking pieces were present. A bitwise AND of those scattered `K` bits with the bitboard of occupied squares gives a specific `K`-bit word embedded within a 64-bit integer.

Magic multiplication can be used to map these scattered `K` bits to the lower `K` bits of a 64-bit integer. These lower `K` bits can then be used to index a table of pre-computed bitboards that represents the squares the piece on its origin square can actually move to (taking blocking pieces into account).
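Sketched in C (all names hypothetical, not taken from any particular engine), the per-square lookup is one AND, one multiply, one shift and one table read:

```c
#include <stdint.h>

/* Hypothetical magic-bitboard entry for one origin square. */
typedef struct {
    uint64_t        mask;     /* relevant blocker squares               */
    uint64_t        magic;    /* multiplier found by an offline search  */
    int             shift;    /* 64 - K, where the mask has K set bits  */
    const uint64_t *attacks;  /* 2^K precomputed attack bitboards       */
} MagicEntry;

/* Map the K scattered occupancy bits to a dense K-bit table index. */
static uint64_t sliding_attacks(const MagicEntry *m, uint64_t occupied)
{
    return m->attacks[((occupied & m->mask) * m->magic) >> m->shift];
}
```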

A typical chess engine using this approach has 2 tables (one for rooks, one for bishops, with queens using the combination of both) of 64 entries each (one per origin square) that contain such pre-computed results. Both the highest-rated closed-source chess engine (**Houdini**) and the highest-rated open-source one (**Stockfish**) currently use this approach for their very high performance.

Finding these magic multipliers is done either with an **exhaustive search** (optimized with early cutoffs) or with **trial and error** (e.g. trying lots of random 64-bit integers). There has been no bit pattern used during move generation for which no magic constant could be found. However, bitwise carry effects are typically necessary when the to-be-mapped bits have (almost) adjacent indices.
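The trial-and-error variant can be sketched in a few lines of C (my own illustration; it reuses the all-ones acceptance test from the previous answer, and a real implementation would use a better random generator):

```c
#include <stdint.h>
#include <stdlib.h>

/* Crude 64-bit random numbers built from the C library's rand(). */
static uint64_t rand64(void)
{
    uint64_t r = 0;
    for (int i = 0; i < 5; i++)
        r = (r << 13) ^ (uint64_t)(rand() & 0x1FFF);
    return r;
}

/* Try random multipliers until one maps the bits selected by `mask`
   into the top n bits without destructive carries (all-ones test). */
static uint64_t find_magic(uint64_t mask, int n)
{
    uint64_t top = ~0ULL << (64 - n);
    for (;;) {
        uint64_t m = rand64() & rand64();  /* sparse candidates work better */
        if (((mask * m) & top) == top)
            return m;
    }
}
```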

AFAIK, the very general SMT-solver approach by @Syzygy has not been used in computer chess, and neither does there appear to be any formal theory regarding the existence and uniqueness of such magic constants.