Set all bytes of int to (unsigned char)0, guaranteed to represent zero?
Solution 1:
C++11
I think the pertinent parts are 3.9.1/1:
For character types, all bits of the object representation participate in the value representation. For unsigned character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types.
Along with 3.9.1/7
The representations of integral types shall define values by use of a pure binary numeration system.
C11
6.2.6.2 is very explicit
For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N−1), so that objects of that type shall be capable of representing values from 0 to 2^N − 1 using a pure binary representation; this shall be known as the value representation. The values of any padding bits are unspecified.
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:
— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value −(2^M) (two’s complement);
— the sign bit has the value −(2^M − 1) (ones’ complement).
Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones’ complement), is a trap representation or a normal value. In the case of sign and magnitude and ones’ complement, if this representation is a normal value it is called a negative zero.
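To make the value-bits/padding-bits distinction concrete, here is a small C++ sketch (not part of either answer) that compares the number of bits in int's object representation with the number of bits in its value representation; any difference would be padding bits. On mainstream implementations the difference is zero.
```cpp
#include <climits>
#include <iostream>
#include <limits>

int main() {
    // Bits in the object representation of int (bytes * bits per byte).
    const int object_bits = sizeof(int) * CHAR_BIT;

    // Bits in the value representation: value bits reported by numeric_limits,
    // plus the single sign bit required for signed types.
    const int value_bits = std::numeric_limits<int>::digits + 1;

    std::cout << "object representation bits: " << object_bits << '\n'
              << "value representation bits:  " << value_bits << '\n'
              << "padding bits:               " << (object_bits - value_bits) << '\n';
}
```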
Summary
I think the intent is the same for both standards:
- char, signed char and unsigned char have all bits participating in the value;
- other integer types may have padding bits which don't participate in the value, and a wrong bit pattern in them may yield an invalid value;
- the interpretation is a pure binary representation, whose definition is expanded in the C11 citation above.
Two things which may not be clear:
- can -0 (for sign and magnitude and ones' complement) be a trap value in C++?
- can one of the padding bits be a parity bit (i.e. is modifying the representation safe as long as the padding bits are left untouched, or not)?
I'd be conservative and assume yes for both.
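Putting the question itself into code, a minimal sketch (assuming an implementation without padding bits in int, which covers every mainstream platform):
```cpp
#include <cstring>
#include <iostream>

int main() {
    int x = 12345;

    // Set every byte of x's object representation to (unsigned char)0.
    std::memset(&x, 0, sizeof x);

    // With a pure binary value representation, all value bits zero and a zero
    // sign bit denote the value 0; padding bits (if any) are the only caveat
    // discussed above.
    std::cout << x << '\n';  // prints 0
}
```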
Solution 2:
Nope. For example, there's nothing in the Standard banning a bias-based representation; it only mandates that the representation is binary.
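For illustration only, a hypothetical sketch of what a bias-based (excess-K) encoding would mean for the all-zero-bytes pattern; the decode function below is made up for this example and does not correspond to any real implementation:
```cpp
#include <cstdint>
#include <iostream>

// Hypothetical excess-32768 decoding of a 16-bit pattern: the stored bits are
// interpreted as an unsigned number minus a fixed bias.
int decode_excess_32768(std::uint16_t bits) {
    return static_cast<int>(bits) - 32768;
}

int main() {
    // Under such an encoding the all-zero bit pattern would decode to -32768,
    // not 0, which is why the answer hinges on exactly which representations
    // the Standard permits.
    std::cout << decode_excess_32768(0) << '\n';  // -32768
}
```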