How does clearing least significant bit repeatedly work in this algorithm?

Solution 1:

c - 1 clears the least significant set bit in the binary representation of c and sets all the (previously unset) bits to the right of that bit.

When you bitwise AND c - 1 with c, you effectively clear the least significant set bit (everything to its right was already zero). In other words, the least significant set bit in c and everything to its right become zeros.

You count this as one, and rightly so, because it was just one set bit from a ^ b.

Now, you repeat this operation until c becomes zero; the number of iterations equals the number of set bits in c, which is the number of differing bits between a and b.
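Putting this together, here is a minimal Python sketch of the whole counting loop (the function name hamming_distance and the sample values are my own, for illustration; it assumes a and b are non-negative integers):

    def hamming_distance(a, b):
        """Count the bit positions in which a and b differ (assumes a, b >= 0)."""
        c = a ^ b          # set bits of c mark exactly the positions where a and b differ
        count = 0
        while c:
            c &= c - 1     # clear the least significant set bit of c
            count += 1     # each cleared bit is one differing position
        return count

    print(hamming_distance(0b110, 0b011))  # the two values differ in 2 positions -> prints 2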

To give you an example of what c - 1 does to the binary representation of c:

c = 6, in binary 110
c-1 = 5, in binary 101
(c-1) & (c) = 110 & 101 = 100
The above is one iteration of the loop.

Next iteration
c = 4, in binary 100
c-1 = 3, in binary 011
(c-1) & (c) = 100 & 011 = 000

After these two iterations c is zero, so the loop has correctly counted the two set bits in 6.

This optimization improves on the alternative of right-shifting the number at each iteration and checking whether its least significant bit is set. With the shift-based approach the number of iterations is determined by the position of the most significant set bit: for a power of two such as 2^7 you iterate 8 times before the number becomes zero. With the optimized method you iterate once per set bit, so for 2^7 it is just one iteration.
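To make the comparison concrete, here is a hedged sketch (the function names are my own) that counts how many loop iterations each variant performs for 2^7:

    def count_bits_shift(c):
        """Shift-based version: iterates once per bit position up to the highest set bit."""
        bits = iterations = 0
        while c:
            bits += c & 1   # check the least significant bit
            c >>= 1         # drop it and move on to the next position
            iterations += 1
        return bits, iterations

    def count_bits_kernighan(c):
        """Optimized version: iterates once per set bit."""
        bits = iterations = 0
        while c:
            c &= c - 1      # clear the least significant set bit
            bits += 1
            iterations += 1
        return bits, iterations

    print(count_bits_shift(2**7))      # (1, 8): one set bit, but 8 loop iterations
    print(count_bits_kernighan(2**7))  # (1, 1): one set bit, one iteration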

Solution 2:

From the structure of the code, you can guess that c & (c-1) has one fewer 1 bit than c, even without analyzing the expression.

Indeed, subtracting 1 flips the trailing 0 bits (because of the borrows) and the rightmost 1 itself. So if you bitwise AND the result with c, only that rightmost 1 disappears.

E.g. c = 1001010100 -> c-1 = 1001010011 -> c & (c-1) = 1001010000.

Next, c = 1001010000 -> c-1 = 1001001111 -> c & (c-1) = 1001000000.
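A short Python trace (just an illustrative sketch of this example, with my own formatting) reproduces these steps until c reaches zero:

    c = 0b1001010100
    while c:
        # show c, c-1, and their AND as 10-digit binary strings
        print(f"{c:010b} & {c - 1:010b} = {c & (c - 1):010b}")
        c &= c - 1

    # 1001010100 & 1001010011 = 1001010000
    # 1001010000 & 1001001111 = 1001000000
    # 1001000000 & 1000111111 = 1000000000
    # 1000000000 & 0111111111 = 0000000000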