Extracting bits with a single multiplication
I saw an interesting technique used in an answer to another question, and would like to understand it a little better.
We're given an unsigned 64-bit integer, and we are interested in the following bits:
1.......2.......3.......4.......5.......6.......7.......8.......
Specifically, we'd like to move them to the top eight positions, like so:
12345678........................................................
We don't care about the value of the bits indicated by ".", and they don't have to be preserved.
The solution was to mask out the unwanted bits and multiply the result by 0x2040810204081. This, as it turns out, does the trick.
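In C, the whole operation would look something like this (the final shift down to a byte is my own addition, not part of the quoted answer):

#include <stdint.h>

/* Gather bits 63, 55, 47, ..., 7 of x into the top byte,
 * then shift them down to get the byte 12345678. */
uint8_t gather_bits(uint64_t x) {
    uint64_t mask = 0x8080808080808080ULL;  /* the eight interesting bits */
    uint64_t mult = 0x0002040810204081ULL;  /* the magic multiplier */
    return (uint8_t)(((x & mask) * mult) >> 56);
}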
How general is this method? Can this technique be used to extract any subset of bits? If not, how does one figure out whether or not the method works for a particular set of bits?
Finally, how would one go about finding the (a?) correct multiplier to extract the given bits?
Very interesting question, and clever trick.
Let's look at a simple example of manipulating a single byte, using unsigned 8-bit arithmetic for simplicity. Imagine your number is xxaxxbxx and you want ab000000.
The solution consisted of two steps: a bit masking, followed by multiplication. The bit mask is a simple AND operation that turns uninteresting bits to zeros. In the above case, your mask would be 00100100 and the result 00a00b00.
Now the hard part: turning that into ab......
A multiplication is a bunch of shift-and-add operations. The key is to allow overflow to "shift away" the bits we don't need and put the ones we want in the right place.
Multiplication by 4 (00000100) would shift everything left by two and get you to a00b0000. To get the b to move up as well, we need to multiply by 1 (to keep the a in the right place) plus 4 (to move the b up). This sum is 5, and combined with the earlier 4 we get a magic number of 20, or 00010100. The original was 00a00b00 after masking; the multiplication gives:
000000a00b000000
00000000a00b0000 +
----------------
000000a0ab0b0000
xxxxxxxxab......
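A quick check of this in C (my own sketch; here a=1 and b=0):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 0x7B;                        /* xxaxxbxx with a=1, b=0 */
    uint8_t y = (uint8_t)((x & 0x24) * 20);  /* mask 00100100, magic number 20 */
    printf("0x%02X\n", y);                   /* prints 0x80, i.e. ab = 10 at the top */
    return 0;
}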
From this approach you can extend to larger numbers and more bits.
One of the questions you asked was "can this be done with any number of bits?" I think the answer is "no", unless you allow several masking operations, or several multiplications. The problem is the issue of "collisions" - for example, the "stray b" in the problem above. Imagine we need to do this to a number like xaxxbxxcx. Following the earlier approach, you would think we need x2, then x(1 + 4 + 16) - in other words, x42 (oooh - the answer to everything!). Result:
00000000a00b00c00
000000a00b00c0000
0000a00b00c000000
-----------------
0000a0ababcbc0c00
xxxxxxxxabc......
As you can see, it still works, but "only just". The key here is that there is "enough space" between the bits we want that we can squeeze everything up. I could not add a fourth bit d right after c, because I would get instances of c+d, where bits might carry into the positions we care about.
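To convince myself, here is an exhaustive check of the x42 trick in C (my own sketch; a 16-bit intermediate stands in for the overflow a real 9-bit machine would do):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* xaxxbxxcx: a, b, c sit at bit positions 7, 4, 1 */
    for (unsigned abc = 0; abc < 8; abc++) {
        uint16_t x = 0;
        if (abc & 4) x |= 1u << 7;            /* a */
        if (abc & 2) x |= 1u << 4;            /* b */
        if (abc & 1) x |= 1u << 1;            /* c */
        uint16_t y = (x & 0x92) * 42;         /* mask 010010010, then x42 */
        printf("abc=%u -> %u%u%u\n", abc,
               (y >> 8) & 1, (y >> 7) & 1, (y >> 6) & 1);
    }
    return 0;
}

All eight patterns come out with abc sitting at the top three bits of the 9-bit result.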
So without formal proof, I would answer the more interesting parts of your question as follows: "No, this will not work for any number of bits. To extract N bits, you need (N-1) spaces between the bits you want to extract, or have additional mask-multiply steps."
The only exception I can think of for the "must have (N-1) zeros between bits" rule is this: if you want to extract two bits that are adjacent to each other in the original, AND you want to keep them in the same order, then you can still do it. And for the purpose of the (N-1) rule they count as two bits.
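For instance (a tiny example of my own), the adjacent pair ab in xxabxxxx moves to the top as one block, with a single multiplier bit:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 0x34;                       /* xxabxxxx with a=1, b=1 */
    uint8_t y = (uint8_t)((x & 0x30) * 4);  /* one block: both shift left by 2 */
    printf("0x%02X\n", y);                  /* prints 0xC0: ab at the top */
    return 0;
}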
There is another insight - inspired by the answer of @Ternary below (see my comment there). For each interesting bit, you only need as many zeros to the right of it as you need space for bits that need to go there. But it also needs as many zeros to its left as it will have result bits to its left. So if a bit b ends up in position m of n, then it needs to have m-1 zeros to its left, and n-m zeros to its right. Especially when the bits are not in the same order in the original number as they will be after the re-ordering, this is an important improvement on the original criterion. It means, for example, that a 16-bit word
a...e.b...d..c..
can be shifted into
abcde...........
even though there is only one space between e and b, two between d and c, and three between the others. Whatever happened to N-1?? In this case, a...e becomes "one block" - a and e need the same shift, so a single multiplier bit puts both in place and "we got e for free". Something similar happens with b and d: the three spaces b needs to its right are the very same three spaces d needs to its left, so that gap does double duty. When we compute the magic number, we find that e needs no multiplier bit of its own:
a: << 0 ( x 1 )
b: << 5 ( x 32 )
c: << 11 ( x 2048 )
d: << 7 ( x 128 )
e: << 0 ( x 1 ) !! duplicate of a
Adding these up gives 1 + 32 + 128 + 2048 = 2209, or 0x8A1.
Clearly, if you wanted these numbers in a different order, you would have to space them further. We can reformulate the (N-1) rule: "It will always work if there are at least (N-1) spaces between bits; or, if the order of bits in the final result is known, then if a bit b ends up in position m of n, it needs to have m-1 zeros to its left, and n-m zeros to its right."
@Ternary pointed out that this rule doesn't quite work, as there can be a carry from bits adding up "just to the right of the target area" - namely, when the bits we're looking for are all ones. Continuing the example I gave above, with the five tightly packed bits in a 16-bit word: if we start with
a...e.b...d..c..
For simplicity, I will name the bit positions ABCDEFGHIJKLMNOP. The math we were going to do was:
ABCDEFGHIJKLMNOP
a000e0b000d00c00
0b000d00c0000000
000d00c000000000
00c0000000000000 +
----------------
abcded(b+c)0c0d00c00
Until now, we thought anything below abcde (positions ABCDE) would not matter, but in fact, as @Ternary pointed out, if b=1, c=1, d=1, then (b+c) in position G will cause a bit to carry into position F, which means that (d+1) in position F will carry a bit into E - and our result is spoilt. Note that space to the right of the least significant bit of interest (c in this example) doesn't matter, since the multiplication pads with zeros from beyond the least significant bit.
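To see this carry in action, here is a small C sketch (constants follow the example above: mask 0x8A24 selects a, e, b, d, c, and the magic number is 0x8A1 = 1 + 32 + 128 + 2048):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* a...e.b...d..c.. with a=e=0 and b=c=d=1 */
    uint16_t x    = (1u << 9) | (1u << 5) | (1u << 2);  /* set b, d, c */
    uint16_t mask = 0x8A24;                             /* a, e, b, d, c */
    uint16_t mult = 0x08A1;                             /* shifts 0, 5, 7, 11 */
    uint16_t y    = (x & mask) * mult;
    /* The top five bits should read abcde = 01110, but the b+c carry
     * ripples through d into e and we get 01111 instead. */
    printf("abcde = 0x%02X\n", y >> 11);                /* prints 0x0F, not 0x0E */
    return 0;
}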
So we need to modify our (m-1)/(n-m) rule. If there is more than one bit that has exactly (n-m) unused bits to the right of it (not counting the last bit in the pattern - "c" in the example above), then we need to strengthen the rule - and we have to do so iteratively!
We have to look not only at the number of bits that meet the (n-m) criterion, but also at the ones that are at (n-m+1), etc. Let's call their numbers Q0 (exactly n-m to the next bit), Q1 (n-m+1), up to Q(N-1) (n-1). Then we risk carry if
Q0 > 1
Q0 == 1 && Q1 >= 2
Q0 == 0 && Q1 >= 4
Q0 == 1 && Q1 > 1 && Q2 >= 2
...
If you look at this, you can see that if you write a simple mathematical expression
W = N * Q0 + (N - 1) * Q1 + ... + Q(N-1)
and the result is W > 2 * N, then you need to increase the RHS criterion by one bit, to (n-m+1). At this point, the operation is safe as long as W < 4; if that doesn't work, increase the criterion one more, etc.
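Rather than applying these counting rules by hand, you can also just test a candidate mask/multiplier pair exhaustively: the don't-care bits are masked off anyway, so only the 2^N settings of the interesting bits matter. A brute-force sketch of my own, reusing the 16-bit example:

#include <stdint.h>
#include <stdio.h>

/* Does (x & mask) * mult move bit src[i] to bit 15-i for every
 * combination of the n interesting bits? */
static int works(uint16_t mask, uint16_t mult, const int *src, int n) {
    for (unsigned v = 0; v < (1u << n); v++) {
        uint16_t x = 0;
        for (int i = 0; i < n; i++)
            if (v & (1u << i))
                x |= (uint16_t)(1u << src[i]);
        uint16_t y = (x & mask) * mult;
        for (int i = 0; i < n; i++)
            if (((v >> i) & 1u) != ((y >> (15 - i)) & 1u))
                return 0;                    /* some pattern spoils the result */
    }
    return 1;
}

int main(void) {
    /* a...e.b...d..c..: a, b, c, d, e at source bits 15, 9, 2, 5, 11 */
    const int src[5] = {15, 9, 2, 5, 11};
    printf("0x8A1 works: %s\n",
           works(0x8A24, 0x08A1, src, 5) ? "yes" : "no");  /* prints "no" */
    return 0;
}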
I think that following the above will get you a long way to your answer...
Very interesting question indeed. I'm chiming in with my two cents, which is that, if you can manage to state problems like this in terms of first-order logic over the bitvector theory, then theorem provers are your friend, and can potentially provide you with very fast answers to your questions. Let's re-state the problem being asked as a theorem:
"There exists some 64-bit constants 'mask' and 'multiplicand' such that, for all 64-bit bitvectors x, in the expression y = (x & mask) * multiplicand, we have that y.63 == x.63, y.62 == x.55, y.61 == x.47, etc."
If this sentence is in fact a theorem, then it is true that some values of the constants 'mask' and 'multiplicand' satisfy this property. So let's phrase this in terms of something that a theorem prover can understand, namely SMT-LIB 2 input:
(set-logic BV)
(declare-const mask (_ BitVec 64))
(declare-const multiplicand (_ BitVec 64))
(assert
  (forall ((x (_ BitVec 64)))
    (let ((y (bvmul (bvand mask x) multiplicand)))
      (and
        (= ((_ extract 63 63) x) ((_ extract 63 63) y))
        (= ((_ extract 55 55) x) ((_ extract 62 62) y))
        (= ((_ extract 47 47) x) ((_ extract 61 61) y))
        (= ((_ extract 39 39) x) ((_ extract 60 60) y))
        (= ((_ extract 31 31) x) ((_ extract 59 59) y))
        (= ((_ extract 23 23) x) ((_ extract 58 58) y))
        (= ((_ extract 15 15) x) ((_ extract 57 57) y))
        (= ((_ extract 7 7) x) ((_ extract 56 56) y))))))
(check-sat)
(get-model)
And now let's ask the theorem prover Z3 whether this is a theorem:
z3.exe /m /smt2 ExtractBitsThroughAndWithMultiplication.smt2
The result is:
sat
(model
  (define-fun mask () (_ BitVec 64)
    #x8080808080808080)
  (define-fun multiplicand () (_ BitVec 64)
    #x0002040810204081)
)
Bingo! It reproduces the result given in the original post in 0.06 seconds.
Looking at this from a more general perspective, we can view this as being an instance of a first-order program synthesis problem, which is a nascent area of research about which few papers have been published. A search for "program synthesis" filetype:pdf should get you started.
Every 1-bit in the multiplier is used to copy one of the bits into its correct position:
- 1 is already in the correct position, so we multiply by 0x0000000000000001.
- 2 must be shifted 7 bit positions to the left, so we multiply by 0x0000000000000080 (bit 7 is set).
- 3 must be shifted 14 bit positions to the left, so we multiply by 0x0000000000004000 (bit 14 is set).
- ...and so on, until...
- 8 must be shifted 49 bit positions to the left, so we multiply by 0x0002000000000000 (bit 49 is set).
The multiplier is the sum of the multipliers for the individual bits.
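In code, the multiplier can be assembled directly from those shift counts; a small sketch of my own:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t mult = 0;
    /* Source bit 63-8*i must move to target bit 63-i,
     * i.e. shift left by 7*i, for i = 0..7. */
    for (int i = 0; i < 8; i++)
        mult += (uint64_t)1 << (7 * i);
    printf("0x%016llX\n", (unsigned long long)mult);  /* 0x0002040810204081 */
    return 0;
}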
This only works because the bits to be collected are not too close together, so that the products of bits which do not belong together in our scheme either fall beyond the 64-bit range or into the lower don't-care part.
Note that the other bits in the original number must be 0. This can be achieved by masking them with an AND operation.