An IEEE754 64-bit double can represent any 32-bit integer, simply because it has 53-odd(a) bits available for precision and the 32-bit integer only needs, well, 32 :-)

It would be plausible for a (non-IEEE 754) 64-bit floating-point format to have fewer than 32 bits of precision. That would allow truly huge numbers (thanks to a larger exponent) but at the cost of precision.

The bottom line is that, provided there are at least as many bits of precision in the mantissa of the floating-point number as there are in the integer (and enough bits in the exponent to scale it), the integer can be represented without loss of precision.


(a) Technically, the 53rd bit of precision is an implied 1 at the start of the sequence, so the amount of "variability" may only be 52 bits. Whether it's 52 or 53, it's still enough bits to represent every 32-bit integer.
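To make that concrete, here is a minimal C sketch (my own illustration, not part of the answer above) that converts the extreme 32-bit values to double and back; every one survives the round trip unchanged:

    #include <assert.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* Spot-check the extremes of the 32-bit range: each value converts
           to double and back unchanged, because a double's 53-bit
           significand comfortably holds all 32 bits. */
        int32_t samples[] = { INT32_MIN, -1, 0, 1, INT32_MAX };
        size_t n = sizeof samples / sizeof samples[0];

        for (size_t i = 0; i < n; i++) {
            double d = (double)samples[i];
            int32_t back = (int32_t)d;
            assert(back == samples[i]);
            printf("%" PRId32 " survives the round trip\n", back);
        }
        return 0;
    }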


Yes. A float (or double) is guaranteed to represent exactly any integer that fits within its precision. A double has 53 bits of precision, which is more than enough to represent any 32-bit integer exactly, and a tiny (statistically speaking) proportion of 64-bit ones too.
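The 64-bit case is easy to see directly: 2^53 is the last point below which every integer is exactly representable in a double, and the very next integer already collapses onto a neighbouring value. A small C sketch (an illustration I'm adding, assuming an IEEE 754 double):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Every integer up to 2^53 fits exactly in a double; one past it
           already needs a 54th bit of precision and gets rounded. */
        int64_t exact  = (int64_t)1 << 53;   /* 9007199254740992 */
        int64_t beyond = exact + 1;          /* 9007199254740993 */

        printf("%lld -> %.1f\n", (long long)exact,  (double)exact);
        printf("%lld -> %.1f\n", (long long)beyond, (double)beyond);
        /* On an IEEE 754 system both lines print the same double value:
           the +1 is lost in the conversion. */
        return 0;
    }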


Exactly which range you can represent exactly depends on a lot of factors in your implementation, but you can establish a lower bound: when the exponent applies no scaling, you can exactly represent integers up to the width of your mantissa field (with a separate sign bit). For IEEE 754 double precision, this means you can represent 52-bit numbers exactly. In general, your mantissa will be over half the width of the overall format.
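If you'd rather query the bound for your own implementation than assume IEEE 754, the significand width is exposed through <float.h>. A short C sketch (my addition, not from the answer above) that prints it for float and double:

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* FLT_MANT_DIG and DBL_MANT_DIG give the significand width in
           base-FLT_RADIX digits (bits when FLT_RADIX == 2), so the largest
           contiguous run of exact integers ends at 2^MANT_DIG.  On an
           IEEE 754 system this prints 24 / 16777216 and 53 / 9007199254740992. */
        printf("float : %d significand bits, exact integers up to %.0f\n",
               FLT_MANT_DIG, ldexp(1.0, FLT_MANT_DIG));
        printf("double: %d significand bits, exact integers up to %.0f\n",
               DBL_MANT_DIG, ldexp(1.0, DBL_MANT_DIG));
        return 0;
    }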