How to count the number of set bits in a 32-bit integer?
This is known as the 'Hamming Weight', 'popcount' or 'sideways addition'.
Some CPUs have a single built-in instruction to do it and others have parallel instructions which act on bit vectors. Instructions like x86's popcnt (on CPUs where it's supported) will almost certainly be fastest for a single integer. Some other architectures may have a slow instruction implemented with a microcoded loop that tests a bit per cycle (citation needed - hardware popcount is normally fast if it exists at all).
The 'best' algorithm really depends on which CPU you are on and what your usage pattern is.
Your compiler may know how to do something that's good for the specific CPU you're compiling for, e.g. C++20 std::popcount(), or C++ std::bitset<32>::count(), as portable ways to access builtin / intrinsic functions (see another answer on this question). But your compiler's choice of fallback for target CPUs that don't have hardware popcnt might not be optimal for your use-case. Or your language (e.g. C) might not expose any portable function that could use a CPU-specific popcount when there is one.
Portable algorithms that don't need (or benefit from) any HW support
A pre-populated table lookup method can be very fast if your CPU has a large cache and you are doing lots of these operations in a tight loop. However it can suffer because of the expense of a 'cache miss', where the CPU has to fetch some of the table from main memory. (Look up each byte separately to keep the table small.) If you want popcount for a contiguous range of numbers, only the low byte is changing for groups of 256 numbers, making this very good.
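A minimal sketch of the byte-table approach plus that contiguous-range trick (the table and function names here are mine, purely illustrative):

#include <stdint.h>

static unsigned char popcnt8[256];  // popcount of every possible byte value

static void init_popcnt8(void) {
    for (int i = 1; i < 256; ++i)   // popcnt8[0] is already 0
        popcnt8[i] = (unsigned char)((i & 1) + popcnt8[i / 2]);
}

// One 32-bit value, looked up a byte at a time to keep the table small.
static unsigned popcount_lut(uint32_t x) {
    return popcnt8[x & 0xFF] + popcnt8[(x >> 8) & 0xFF]
         + popcnt8[(x >> 16) & 0xFF] + popcnt8[x >> 24];
}

// Sum of popcounts over the contiguous range n .. n+255 (n 256-aligned):
// the upper three bytes are constant, so count them once, not 256 times.
static uint64_t popcount_block(uint32_t n) {
    unsigned hi = popcnt8[(n >> 8) & 0xFF] + popcnt8[(n >> 16) & 0xFF]
                + popcnt8[n >> 24];
    uint64_t total = 256ULL * hi;
    for (unsigned lo = 0; lo < 256; ++lo)
        total += popcnt8[lo];
    return total;
}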
If you know that your bytes will be mostly 0's or mostly 1's then there are efficient algorithms for these scenarios, e.g. clearing the lowest set bit with a bithack in a loop until the value becomes zero.
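That bithack is Kernighan's trick: x & (x - 1) clears the lowest set bit, so the loop runs once per set bit and is fast for sparse inputs. A minimal sketch:

#include <stdint.h>

int popcount_sparse(uint32_t x) {
    int count = 0;
    while (x) {
        x &= x - 1;  // clear the lowest set bit
        ++count;
    }
    return count;
}

(For the mostly-1's case, count the zeros of ~x the same way.)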
I believe a very good general purpose algorithm is the following, known as 'parallel' or 'variable-precision SWAR algorithm'. I have expressed this in a C-like pseudo language, you may need to adjust it to work for a particular language (e.g. using uint32_t for C++ and >>> in Java):
GCC 10 and clang 10.0 can recognize this pattern / idiom and compile it to a hardware popcnt or equivalent instruction when available, giving you the best of both worlds. (https://godbolt.org/z/qGdh1dvKK)
#include <stdint.h>

int numberOfSetBits(uint32_t i)
{
    // Java: use int, and use >>> instead of >>. Or use Integer.bitCount()
    // C or C++: use uint32_t
    i = i - ((i >> 1) & 0x55555555);                // add pairs of bits
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333); // quads
    i = (i + (i >> 4)) & 0x0F0F0F0F;                // groups of 8
    return (i * 0x01010101) >> 24;                  // horizontal sum of bytes
}
For JavaScript: coerce to integer with |0 for performance; change the first line to i = (i|0) - ((i >> 1) & 0x55555555);
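A few sanity checks for the C version, with counts that are easy to verify by hand:

numberOfSetBits(0);           // 0
numberOfSetBits(0xF);         // 4
numberOfSetBits(0x12345678);  // 13
numberOfSetBits(0xFFFFFFFF);  // 32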
This has the best worst-case behaviour of any of the algorithms discussed, so will efficiently deal with any usage pattern or values you throw at it. (Its performance is not data-dependent on normal CPUs where all integer operations including multiply are constant-time. It doesn't get any faster with "simple" inputs, but it's still pretty decent.)
References:
- https://graphics.stanford.edu/~seander/bithacks.html
- https://en.wikipedia.org/wiki/Hamming_weight
- http://gurmeet.net/puzzles/fast-bit-counting-routines/
- http://aggregate.ee.engr.uky.edu/MAGIC/#Population%20Count%20(Ones%20Count)
How this SWAR bithack works:
i = i - ((i >> 1) & 0x55555555);
The first step is an optimized version of masking to isolate the odd / even bits, shifting to line them up, and adding. This effectively does 16 separate additions in 2-bit accumulators (SWAR = SIMD Within A Register). Like (i & 0x55555555) + ((i >> 1) & 0x55555555).
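Checking that equivalence on one example value:

uint32_t i = 0xB;  // 0b1011: pairs are 0b10 (one bit set) and 0b11 (two bits set)
uint32_t masked = (i & 0x55555555) + ((i >> 1) & 0x55555555);  // 0b0110
uint32_t subbed = i - ((i >> 1) & 0x55555555);                 // also 0b0110
// Both give 0b0110: the high pair holds count 1, the low pair count 2.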
The next step adds odd/even pairs of those 16x 2-bit accumulators, producing 8x 4-bit sums. The i - ... optimization isn't possible this time, so it just masks both before and after shifting. Using the same 0x33... constant both times instead of 0xccc... before shifting is a good thing when compiling for ISAs that need to construct 32-bit constants in registers separately.
The final shift-and-add step of (i + (i >> 4)) & 0x0F0F0F0F widens to 4x 8-bit accumulators. It masks after adding instead of before, because the maximum value in any 4-bit accumulator is 4 (if all 4 of the corresponding input bits were set). 4+4 = 8, which still fits in 4 bits, so carry between nibble elements is impossible in i + (i >> 4).
So far this is just fairly normal SIMD using SWAR techniques with a few clever optimizations. Continuing on with the same pattern for 2 more steps can widen to 2x 16-bit then 1x 32-bit counts. But there is a more efficient way on machines with fast hardware multiply:
Once we have few enough "elements", a multiply with a magic constant can sum all the elements into the top element, in this case byte elements. Multiply is done by left-shifting and adding, so x * 0x01010101 results in x + (x<<8) + (x<<16) + (x<<24). Our 8-bit elements are wide enough (and holding small enough counts) that this doesn't produce carry into the top 8 bits.
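A concrete illustration of that multiply (byte values chosen arbitrarily):

uint32_t x = 0x01020304;                 // per-byte counts 1, 2, 3, 4
unsigned sum = (x * 0x01010101u) >> 24;  // top byte = 1+2+3+4 = 10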
A 64-bit version of this can do 8x 8-bit elements in a 64-bit integer with a 0x0101010101010101 multiplier, and extract the high byte with >>56. So it doesn't take any extra steps, just wider constants. This is what GCC uses for __builtin_popcountll on x86 systems when the hardware popcnt instruction isn't enabled. If you can use builtins or intrinsics for this, do so to give the compiler a chance to do target-specific optimizations.
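For reference, the 64-bit widening looks like this (the same well-known pattern, written out as a sketch):

#include <stdint.h>

int popcount64_swar(uint64_t x)
{
    x = x - ((x >> 1) & 0x5555555555555555ULL);
    x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    return (int)((x * 0x0101010101010101ULL) >> 56);  // horizontal sum of 8 bytes
}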
With full SIMD for wider vectors (e.g. counting a whole array)
This bitwise-SWAR algorithm could parallelize to be done in multiple vector elements at once, instead of in a single integer register, for a speedup on CPUs with SIMD but no usable popcount instruction. (e.g. x86-64 code that has to run on any CPU, not just Nehalem or later.)
However, the best way to use vector instructions for popcount is usually by using a variable-shuffle to do a table-lookup for 4 bits at a time of each byte in parallel. (The 4 bits index a 16 entry table held in a vector register).
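A sketch of that nibble-LUT idea with SSSE3 intrinsics (the function name is mine; real implementations like Mula's also handle loading, accumulation, and tails):

#include <tmmintrin.h>  // SSSE3

// Per-byte popcount of 16 bytes at once: each nibble indexes a
// 16-entry popcount table held in a vector register.
static inline __m128i popcnt_epi8(__m128i v)
{
    const __m128i lut  = _mm_setr_epi8(0,1,1,2, 1,2,2,3, 1,2,2,3, 2,3,3,4);
    const __m128i mask = _mm_set1_epi8(0x0F);
    __m128i lo = _mm_and_si128(v, mask);                     // low nibble of each byte
    __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), mask);  // high nibble of each byte
    return _mm_add_epi8(_mm_shuffle_epi8(lut, lo),           // LUT[lo] + LUT[hi]
                        _mm_shuffle_epi8(lut, hi));
}
// Sum the per-byte counts with _mm_sad_epu8 against zero to get totals.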
On Intel CPUs, the hardware 64-bit popcnt instruction can outperform an SSSE3 PSHUFB bit-parallel implementation by about a factor of 2, but only if your compiler gets it just right. Otherwise SSE can come out significantly ahead. Newer compiler versions are aware of the popcnt false dependency problem on Intel.
- https://github.com/WojciechMula/sse-popcount state-of-the-art x86 SIMD popcount for SSSE3, AVX2, AVX512BW, AVX512VBMI, or AVX512 VPOPCNT. Using Harley-Seal across vectors to defer popcount within an element. (Also ARM NEON)
- Counting 1 bits (population count) on large data using AVX-512 or AVX-2
- related: https://github.com/mklarqvist/positional-popcount - separate counts for each bit-position of multiple 8, 16, 32, or 64-bit integers. (Again, x86 SIMD including AVX-512 which is really good at this, with vpternlogd making Harley-Seal very good.)
Some languages portably expose the operation in a way that can use efficient hardware support if available, otherwise some library fallback that's hopefully decent.
For example (from a table by language):
- C++ has std::bitset<>::count(), or C++20 std::popcount(T x)
- Java has java.lang.Integer.bitCount() (also for Long or BigInteger)
- C# has System.Numerics.BitOperations.PopCount()
- Python has int.bit_count() (since 3.10)
Not all compilers / libraries actually manage to use HW support when it's available, though. (Notably MSVC: even with options that make std::popcount inline as x86 popcnt, its std::bitset::count still always uses a lookup table. This will hopefully change in future versions.)
Also consider the built-in functions of your compiler when the portable language doesn't have this basic bit operation. In GNU C for example:
int __builtin_popcount (unsigned int x);
int __builtin_popcountll (unsigned long long x);
In the worst case (no single-instruction HW support) the compiler will generate a call to a function (which in current GCC uses a shift/and bit-hack like this answer, at least for x86). In the best case the compiler will emit a CPU instruction to do the job. (Just like a * or / operator - GCC will use a hardware multiply or divide instruction if available, otherwise it will call a libgcc helper function.) Or even better, if the operand is a compile-time constant after inlining, it can do constant-propagation to get a compile-time-constant popcount result.
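Usage is trivial; with GCC or clang this compiles to a single instruction when the target supports one:

unsigned x = 0xF0F0F0F0u;
int n = __builtin_popcount(x);  // 16; becomes x86 popcnt with -mpopcnt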
The GCC builtins even work across multiple platforms. Popcount has almost become mainstream in the x86 architecture, so it makes sense to start using the builtin now so you can recompile to let it inline a hardware instruction when you compile with -mpopcnt or something that includes that (e.g. https://godbolt.org/z/Ma5e5a). Other architectures have had popcount for years, but in the x86 world there are still some ancient Core 2 and similar vintage AMD CPUs in use.
On x86, you can tell the compiler that it can assume support for the popcnt instruction with -mpopcnt (also implied by -msse4.2). See GCC x86 options. -march=nehalem -mtune=skylake (or -march= whatever CPU you want your code to assume and to tune for) could be a good choice. Running the resulting binary on an older CPU will result in an illegal-instruction fault.
To make binaries optimized for the machine you build them on, use -march=native (with gcc, clang, or ICC).
MSVC provides an intrinsic for the x86 popcnt instruction, but unlike gcc it's really an intrinsic for the hardware instruction and requires hardware support.
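As far as I know, that intrinsic is __popcnt in <intrin.h>; since there's no fallback, it faults on CPUs without the instruction:

#include <intrin.h>

unsigned n = __popcnt(0xF0Fu);  // 8; executes the popcnt instruction unconditionally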
Using std::bitset<>::count() instead of a built-in
In theory, any compiler that knows how to popcount efficiently for the target CPU should expose that functionality through ISO C++ std::bitset<>. In practice, you might be better off with the bit-hack AND/shift/ADD in some cases for some target CPUs.
For target architectures where hardware popcount is an optional extension (like x86), not all compilers have a std::bitset that takes advantage of it when available. For example, MSVC has no way to enable popcnt support at compile time, and its std::bitset<>::count always uses a table lookup, even with /Ox /arch:AVX (which implies SSE4.2, which in turn implies the popcnt feature). (Update: see below; that does get MSVC's C++20 std::popcount to use x86 popcnt, but still not its bitset<>::count. MSVC could fix that by updating their standard library headers to use std::popcount when available.)
But at least you get something portable that works everywhere, and with gcc/clang with the right target options, you get hardware popcount for architectures that support it.
#include <bitset>
#include <limits>
#include <type_traits>
template<typename T>
//static inline // static if you want to compile with -mpopcnt in one compilation unit but not others
typename std::enable_if<std::is_integral<T>::value, unsigned >::type
popcount(T x)
{
static_assert(std::numeric_limits<T>::radix == 2, "non-binary type");
// sizeof(x)*CHAR_BIT
constexpr int bitwidth = std::numeric_limits<T>::digits + std::numeric_limits<T>::is_signed;
// std::bitset constructor was only unsigned long before C++11. Beware if porting to C++03
static_assert(bitwidth <= std::numeric_limits<unsigned long long>::digits, "arg too wide for std::bitset() constructor");
typedef typename std::make_unsigned<T>::type UT; // probably not needed, bitset width chops after sign-extension
std::bitset<bitwidth> bs( static_cast<UT>(x) );
return bs.count();
}
See asm from gcc, clang, icc, and MSVC on the Godbolt compiler explorer.
x86-64 gcc -O3 -std=gnu++11 -mpopcnt emits this:
unsigned test_short(short a) { return popcount(a); }
movzx eax, di # note zero-extension, not sign-extension
popcnt rax, rax
ret
unsigned test_int(int a) { return popcount(a); }
mov eax, edi
popcnt rax, rax # unnecessary 64-bit operand size
ret
unsigned test_u64(unsigned long long a) { return popcount(a); }
xor eax, eax # gcc avoids false dependencies for Intel CPUs
popcnt rax, rdi
ret
PowerPC64 gcc -O3 -std=gnu++11 emits (for the int arg version):
rldicl 3,3,0,32 # zero-extend from 32 to 64-bit
popcntd 3,3 # popcount
blr
This source isn't x86-specific or GNU-specific at all, but only compiles well with gcc/clang/icc, at least when targeting x86 (including x86-64).
Also note that gcc's fallback for architectures without single-instruction popcount is a byte-at-a-time table lookup. This isn't wonderful for ARM, for example.
C++20 has std::popcount(T)
Current libstdc++ headers unfortunately define it with a special case if(x==0) return 0; at the start, which clang doesn't optimize away when compiling for x86:
#include <bit>
int bar(unsigned x) {
return std::popcount(x);
}
clang 11.0.1 -O3 -std=gnu++20 -march=nehalem (https://godbolt.org/z/arMe5a):
# clang 11
bar(unsigned int): # @bar(unsigned int)
popcnt eax, edi
cmove eax, edi # redundant: if popcnt result is 0, return the original 0 instead of the popcnt-generated 0...
ret
But GCC compiles nicely:
# gcc 10
xor eax, eax # break false dependency on Intel SnB-family before Ice Lake.
popcnt eax, edi
ret
Even MSVC does well with it, as long as you use -arch:AVX or later (and enable C++20 with -std:c++latest). https://godbolt.org/z/7K4Gef
int bar(unsigned int) PROC ; bar, COMDAT
popcnt eax, ecx
ret 0
int bar(unsigned int) ENDP ; bar
In my opinion, the "best" solution is the one that can be read by another programmer (or the original programmer two years later) without copious comments. You may well want the fastest or cleverest solution which some have already provided but I prefer readability over cleverness any time.
unsigned int bitCount (unsigned int value) {
unsigned int count = 0;
while (value > 0) { // until all bits are zero
if ((value & 1) == 1) // check lower bit
count++;
value >>= 1; // shift bits, removing lower bit
}
return count;
}
If you want more speed (and assuming you document it well to help out your successors), you could use a table lookup:
// Lookup table for fast calculation of bits set in 8-bit unsigned char.
static unsigned char oneBitsInUChar[] = {
// 0 1 2 3 4 5 6 7 8 9 A B C D E F (<- n)
// =====================================================
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, // 0n
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, // 1n
: : :
4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8, // Fn
};
// Function for fast calculation of bits set in 16-bit unsigned short.
unsigned char oneBitsInUShort (unsigned short x) {
return oneBitsInUChar [x >> 8]
+ oneBitsInUChar [x & 0xff];
}
// Function for fast calculation of bits set in 32-bit unsigned int.
unsigned char oneBitsInUInt (unsigned int x) {
return oneBitsInUShort (x >> 16)
+ oneBitsInUShort (x & 0xffff);
}
These rely on specific data type sizes, so they're not that portable. But since many performance optimisations aren't portable anyway, that may not be an issue. If you want portability, I'd stick to the readable solution.
From Hacker's Delight, p. 66, Figure 5-2
int pop(unsigned x)
{
x = x - ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x + (x >> 4)) & 0x0F0F0F0F;
x = x + (x >> 8);
x = x + (x >> 16);
return x & 0x0000003F;
}
Executes in about 20 instructions (arch dependent), with no branching.
Hacker's Delight is delightful! Highly recommended.
I think the fastest way—without using lookup tables and popcount—is the following. It counts the set bits with just 12 operations.
int popcount(unsigned int v) {
    v = v - ((v >> 1) & 0x55555555);                // put count of each 2 bits into those 2 bits
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // put count of each 4 bits into those 4 bits
    return ((v + (v >> 4) & 0x0F0F0F0F) * 0x01010101) >> 24;
}
It works because you can count the total number of set bits by dividing in two halves, counting the number of set bits in both halves and then adding them up. Also known as the Divide and Conquer paradigm. Let's get into detail.
v = v - ((v >> 1) & 0x55555555);
The number of set bits in two bits can be 0b00, 0b01 or 0b10. Let's try to work this out on 2 bits:
-----------------------------------------------------
| v    | (v >> 1) & 0b01 | v - ((v >> 1) & 0b01) |
-----------------------------------------------------
  0b00        0b00                0b00
  0b01        0b00                0b01
  0b10        0b01                0b01
  0b11        0b01                0b10
This is what was required: the last column shows the count of set bits in every two-bit pair. If the two-bit value is >= 2 (0b10), then (v >> 1) & 0b01 produces 0b01; otherwise it produces 0b00.
v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
This statement should be easy to understand. After the first operation we have the count of set bits in every two bits; now we sum up that count in every 4 bits.
v & 0b00110011         // selects the even-position two-bit groups
(v >> 2) & 0b00110011  // selects the odd-position two-bit groups (shifted down)
We then sum up the above result, giving us the total count of set bits in 4 bits. The last statement is the most tricky.
((v + (v >> 4) & 0x0F0F0F0F) * 0x01010101) >> 24;
Let's break it down further...
v + (v >> 4)
It's similar to the second statement; we are counting the set bits in groups of 4 instead. We know, because of our previous operations, that every nibble holds the count of set bits from the corresponding input nibble. Let's look at an example. Suppose we have the byte 0b01000010. It means the first (high) nibble's count is 4 and the second (low) nibble's count is 2. Now we add those nibbles together:
0b01000010 + (0b01000010 >> 4) = 0b01000010 + 0b00000100 = 0b01000110
The low nibble now holds the count of set bits in the whole byte, so we mask away the high four bits of every byte (discarding them):
0b01000110 & 0x0F = 0b00000110
Now every byte has the count of set bits in it. We need to add them all up together. The trick is to multiply the result by 0x01010101, which has an interesting property. If our number has four bytes, A B C D, it will result in a new number with these bytes A+B+C+D B+C+D C+D D. A 4-byte number can have a maximum of 32 bits set, which can be represented as 0b00100000.
All we need now is the first byte, which has the sum of all set bits in all the bytes, and we get it by >> 24. This algorithm was designed for 32-bit words but can be easily modified for 64-bit words.