How to implement bit count using only bitwise operators?
The task is to implement bit-count logic using only bitwise operators. I got it working fine, but I'm wondering if someone can suggest a more elegant approach.
Only bitwise ops are allowed: no "if", "for", etc.
int x = 4;
printf("%d\n", x & 0x1);
printf("%d\n", (x >> 1) & 0x1);
printf("%d\n", (x >> 2) & 0x1);
printf("%d\n", (x >> 3) & 0x1);
Thank you.
Solution 1:
From http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
unsigned int v; // count bits set in this (32-bit value)
unsigned int c; // store the total here
c = v - ((v >> 1) & 0x55555555);
c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
c = ((c >> 4) + c) & 0x0F0F0F0F;
c = ((c >> 8) + c) & 0x00FF00FF;
c = ((c >> 16) + c) & 0x0000FFFF;
Edit: Admittedly it's a bit optimized, which makes it harder to read. It's easier to read as:
c = (v & 0x55555555) + ((v >> 1) & 0x55555555);
c = (c & 0x33333333) + ((c >> 2) & 0x33333333);
c = (c & 0x0F0F0F0F) + ((c >> 4) & 0x0F0F0F0F);
c = (c & 0x00FF00FF) + ((c >> 8) & 0x00FF00FF);
c = (c & 0x0000FFFF) + ((c >> 16) & 0x0000FFFF);
Each of those five steps adds neighbouring bits together in groups of 1, then 2, then 4, etc. The method is based on divide and conquer.
In the first step we add together bits 0 and 1 and put the result in the two-bit segment 0-1, add bits 2 and 3 and put the result in the two-bit segment 2-3, and so on.
In the second step we add the two-bit segments 0-1 and 2-3 together and put the result in the four-bit segment 0-3, add together the two-bit segments 4-5 and 6-7 and put the result in the four-bit segment 4-7, and so on.
Example:
So if I have the number 395, in binary 0000000110001011 (0 0 0 0 0 0 0 1 1 0 0 0 1 0 1 1):
After the first step I have: 0000000101000110 (0+0 0+0 0+0 0+1 1+0 0+0 1+0 1+1) = 00 00 00 01 01 00 01 10
In the second step I have: 0000000100010011 ( 00+00 00+01 01+00 01+10 ) = 0000 0001 0001 0011
In the third step I have: 0000000100000100 ( 0000+0001 0001+0011 ) = 00000001 00000100
In the last step I have: 0000000000000101 ( 00000001+00000100 )
which is equal to 5, the correct result.
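Put together as a compilable function (a direct transcription of the snippets above; the name popcount32 is mine):

#include <stdio.h>
#include <stdint.h>

/* Parallel (SWAR) bit count of a 32-bit value, using the steps above. */
static unsigned popcount32(uint32_t v)
{
    v = v - ((v >> 1) & 0x55555555u);                 /* sums of bit pairs */
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u); /* sums per nibble   */
    v = ((v >> 4) + v) & 0x0F0F0F0Fu;                 /* sums per byte     */
    v = ((v >> 8) + v) & 0x00FF00FFu;                 /* sums per 16 bits  */
    v = ((v >> 16) + v) & 0x0000FFFFu;                /* final total       */
    return v;
}

int main(void)
{
    printf("%u\n", popcount32(395u)); /* prints 5, matching the worked example */
    return 0;
}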
Solution 2:
I would use a pre-computed array
uint8_t set_bits_in_byte_table[ 256 ];
The i-th entry in this table stores the number of set bits in byte i, e.g. set_bits_in_byte_table[ 100 ] = 3, since there are three 1-bits in the binary representation of decimal 100 (= 0x64 = 0110-0100).
Then I would try
size_t count_set_bits( uint32_t const x ) {
    size_t count = 0;
    /* Walk over the four bytes of x; byte order doesn't matter for a count. */
    uint8_t const * byte_ptr = (uint8_t const *) &x;
    count += set_bits_in_byte_table[ *byte_ptr++ ];
    count += set_bits_in_byte_table[ *byte_ptr++ ];
    count += set_bits_in_byte_table[ *byte_ptr++ ];
    count += set_bits_in_byte_table[ *byte_ptr++ ];
    return count;
}
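The answer leaves out how the table gets filled. One way (my sketch, not part of the answer) is to build it once at startup from the recurrence popcount(i) = popcount(i >> 1) + (i & 1); the loop is only needed for the one-time setup, not for the counting itself. Together with the declarations above and <stdint.h>, <stddef.h>, <stdio.h> at the top of the file, this forms a complete program:

/* Fill the table once; table[0] is already 0 because the array has static storage. */
static void init_set_bits_in_byte_table(void)
{
    for (int i = 1; i < 256; ++i)
        set_bits_in_byte_table[i] =
            set_bits_in_byte_table[i >> 1] + (uint8_t)(i & 1);
}

int main(void)
{
    init_set_bits_in_byte_table();
    printf("%zu\n", count_set_bits(395u)); /* prints 5 */
    return 0;
}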
Solution 3:
Here's a simple illustration of the answer:
a b c d     0 a b c     0 b 0 d
   &           &           +
0 1 0 1     0 1 0 1     0 a 0 c
-------     -------     -------
0 b 0 d     0 a 0 c     a+b c+d
So we have exactly 2 bits to store a + b and 2 bits to store c + d. Since a, b, etc. are each either 0 or 1, 2 bits are enough to hold their sum. In the next step we'll have 4 bits to store each sum of two 2-bit values, and so on.
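The same picture in code, for a single 4-bit value (a sketch with my own variable names, showing just the two reduction steps from the diagram):

#include <stdio.h>

int main(void)
{
    unsigned v = 0xB;                        /* 1011: a=1 b=0 c=1 d=1 */
    unsigned pairs = (v & 0x5)               /* 0 b 0 d               */
                   + ((v >> 1) & 0x5);       /* 0 a 0 c -> a+b, c+d   */
    unsigned total = (pairs & 0x3)           /* c+d                   */
                   + ((pairs >> 2) & 0x3);   /* a+b                   */
    printf("%u\n", total);                   /* prints 3              */
    return 0;
}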