Why is the maximum value of an unsigned n-bit integer 2ⁿ-1 and not 2ⁿ?
The maximum value of an n-bit integer is 2^n - 1. Why do we have the "minus 1"? Why isn't the maximum just 2^n?
Solution 1:
The -1 is because integers start at 0, but our counting starts at 1.
So, 2^32 - 1 is the maximum value for a 32-bit unsigned integer (32 binary digits), while 2^32 is the number of possible values.
To see why, look at decimal: 10^2 - 1 is the maximum value of a 2-digit decimal number (99). Because our intuitive human counting starts at 1 but integers are 0-based, 10^2 is the number of values (100).
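To make the count-versus-maximum distinction concrete, here is a minimal C sketch (my own illustration; the answer itself contains no code). It computes 2^32 in a 64-bit variable, since the count of values itself doesn't fit in 32 bits:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 2 decimal digits: 10^2 = 100 possible values, but the largest is 99 */
        printf("2 decimal digits: %d values, max %d\n", 100, 100 - 1);

        /* 32 bits: 2^32 possible values; compute in 64 bits so the count fits */
        uint64_t count = 1ULL << 32;
        printf("32 bits: %llu values, max %llu\n",
               (unsigned long long)count, (unsigned long long)(count - 1));

        /* the standard library constant agrees: 4294967295 */
        printf("UINT32_MAX = %u\n", (unsigned)UINT32_MAX);
        return 0;
    }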
Solution 2:
2^32 in binary:
1 00000000 00000000 00000000 00000000
2^32 - 1 in binary:
11111111 11111111 11111111 11111111
As you can see, 2^32 takes 33 bits, whereas 2^32 - 1 is the maximum value of a 32-bit integer.
The reason for the seemingly "off-by-one" error here is that the lowest bit represents a one, not a two. So the first bit is actually 2^0, the second bit is 2^1, and so on. Setting all 32 bits therefore sums to 2^0 + 2^1 + ... + 2^31 = 2^32 - 1.
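A short C sketch (illustrative, not part of the original answer) that sums the 32 bit weights and checks that identity:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* sum the weight of every bit position: 2^0 + 2^1 + ... + 2^31 */
        uint64_t sum = 0;
        for (int i = 0; i < 32; i++) {
            sum += 1ULL << i;
        }
        /* both lines print 4294967295, the all-ones 32-bit pattern */
        printf("sum of bit weights = %llu\n", (unsigned long long)sum);
        printf("2^32 - 1           = %llu\n", (unsigned long long)((1ULL << 32) - 1));
        return 0;
    }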
Solution 3:
2^32 in binary is one followed by 32 zeroes, for a total of 33 bits. That doesn't fit in a 32-bit int value.
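As a sketch of what "doesn't fit" means (assuming C with the usual fixed-width types), truncating 2^32 to 32 bits drops the 33rd bit and leaves 0, while 2^32 - 1 survives intact:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t pow32 = 1ULL << 32;           /* 2^32: needs a 33rd bit */
        uint32_t truncated = (uint32_t)pow32;  /* keep only the low 32 bits */
        uint32_t max32 = (uint32_t)(pow32 - 1);

        printf("2^32 truncated to 32 bits: %u\n", (unsigned)truncated); /* 0: the 1 was dropped */
        printf("2^32 - 1 in 32 bits:       %u\n", (unsigned)max32);     /* 4294967295 */
        return 0;
    }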