32 bit unsigned multiply on 64 bit causing undefined behavior?
Solution 1:
The simplest way to get the multiplication to happen in an unsigned type that is at least uint32_t, and also at least unsigned int, is to involve an expression of type unsigned int.
v = 1U * s1 * s2;
This either converts 1U to uint32_t, or s1 and s2 to unsigned int, depending on what's appropriate for your particular platform.
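For context, here is a minimal sketch (assuming, as in the question, that s1, s2 and v are uint32_t; the function name mul32 is just for illustration) of where the undefined behavior comes from and how the 1U trick avoids it:

#include <stdint.h>

uint32_t mul32(uint32_t s1, uint32_t s2)
{
    /* If int is wider than 32 bits, s1 and s2 promote to signed int, and
     * a plain s1 * s2 can overflow, which is undefined behavior.
     * Involving 1U makes the whole product unsigned on every platform. */
    uint32_t v = 1U * s1 * s2;
    return v;
}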
@Deduplicator comments that some compilers, where uint32_t is narrower than unsigned int, may warn about the implicit conversion in the assignment, and notes that such warnings are likely suppressible by making the conversion explicit:
v = (uint32_t) (1U * s1 * s2);
It looks a bit less elegant, in my opinion, though.
Solution 2:
Congratulations on finding a friction point.
A possible way:
v = (uint32_t) (UINT_MAX <= 0xffffffff
                ? s1 * s2
                : (unsigned)s1 * (unsigned)s2);
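If this is needed in more than one place, one option (purely a sketch, not part of the answer above, and the name umul32 is made up) is to pack the conditional into a small helper; the UINT_MAX test is a compile-time constant, so the compiler can discard the untaken branch:

#include <stdint.h>
#include <limits.h>

/* Hypothetical helper wrapping the conditional from above. */
static inline uint32_t umul32(uint32_t s1, uint32_t s2)
{
    return (uint32_t) (UINT_MAX <= 0xffffffff
                       ? s1 * s2                        /* multiplication is already unsigned here    */
                       : (unsigned)s1 * (unsigned)s2);  /* int is wider than 32 bits: use unsigned int */
}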
Anyway, looks like adding some typedefs to <stdint.h> for types guaranteed to be no smaller than int would be in order ;-).
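A rough sketch of what such a typedef could look like (the name uat32 is made up, and none of this is actually in <stdint.h>): pick whichever of unsigned int and uint32_t is at least 32 bits wide and can never be promoted to signed int.

#include <stdint.h>
#include <limits.h>

#if UINT_MAX >= 0xffffffff
typedef unsigned int uat32;   /* unsigned int already holds every uint32_t value */
#else
typedef uint32_t uat32;       /* wider than int, so it is never promoted */
#endif

/* uint32_t v = (uint32_t) ((uat32)s1 * (uat32)s2);   -- the multiplication is always unsigned */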