Precision of multiplication by 1.0 and int to float conversion

Solution 1:

No.

If i is sufficiently large that int(float(i)) != i (assuming float is IEEE-754 single precision, i = 0x1000001 suffices to exhibit this), then the claim is false, because writing i * 1.0f forces a conversion of i to float, and that conversion changes the value even though the subsequent multiplication does not.
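
As a minimal illustration (assuming a 32-bit int and an IEEE-754 single-precision float), the round trip through float already loses the value before the multiplication even happens:

#include <stdio.h>

int main(void) {
        int i = 0x1000001;              /* 16777217, i.e. 2^24 + 1 */
        float f = (float)i;             /* conversion rounds to 16777216.0f */
        int back = (int)(f * 1.0f);     /* the * 1.0f changes nothing further */

        printf("i    = %d\n", i);       /* 16777217 */
        printf("back = %d\n", back);    /* 16777216 */
        return 0;
}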

However, if i is a 32-bit integer and double is IEEE-754 double, then it is true that int(i*1.0) == i.
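
A quick sketch of why the double case is safe (assuming a 32-bit int and IEEE-754 double, whose 53-bit significand can hold every 32-bit integer exactly; INT_MAX is just one sample value here):

#include <stdio.h>
#include <limits.h>

int main(void) {
        /* INT_MAX is the worst case in magnitude for a 32-bit int,
           yet it still fits exactly in a double's 53-bit significand */
        int i = INT_MAX;
        double d = i * 1.0;             /* exact conversion, exact multiplication */
        printf("%s\n", (int)d == i ? "round trip is exact" : "value changed");
        return 0;
}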


Just to be totally clear, multiplication by 1.0f is exact. It's the conversion from int to float that may not be.
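
A small check that separates the two steps (the values below are just illustrative, assuming IEEE-754 single precision): multiplying an existing float by 1.0f never changes it, while the int-to-float conversion can.

#include <stdio.h>

int main(void) {
        float f = 16777216.0f;          /* a value that is already a float */
        printf("f * 1.0f == f      : %s\n", (f * 1.0f) == f ? "yes" : "no");    /* always yes */

        int i = 16777217;               /* 2^24 + 1, not representable as a float */
        printf("(int)(float)i == i : %s\n", (int)(float)i == i ? "yes" : "no"); /* no here */
        return 0;
}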

Solution 2:

No. For the same bit width, IEEE-754 floating point numbers trade integer precision for a greater dynamic range, so not every integer survives the conversion.

See for example the output of this little snippet:

#include <stdio.h>

int main(void) {
        int x = 43046721;       /* needs 26 significant bits */

        float y = x;            /* implicit int-to-float conversion rounds the value */

        printf("%d\n", x);
        printf("%f\n", y);
        return 0;
}

43046721 cannot be represented exactly in the 24 bits of precision available in a 32-bit float number, so the output is something along these lines:

43046721
43046720.000000

In fact, I would expect any odd number above 16,777,216 (2^24) to have the same issue when converted to a 32-bit float number.
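
A brute-force check of that expectation (sampling a few odd values just above 2^24, assuming a 32-bit IEEE-754 float):

#include <stdio.h>

int main(void) {
        /* every odd integer above 2^24 needs at least 25 significant bits,
           so the conversion to float has to round it */
        for (int x = 16777217; x <= 16777235; x += 2) {
                float y = x;
                printf("%d -> %.1f %s\n", x, y, (int)y == x ? "(exact)" : "(rounded)");
        }
        return 0;
}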

A few points of interest:

  • This has more to do with the implicit int-to-float conversion than with the multiplication itself.

  • This is not by any means unique to C; Java, for example, is subject to exactly the same issue.

  • Most compilers have optimization options that relax certain restrictions of the standard and may affect how such conversions are handled. In that case, (int)((float)x * 1.0f) == x might always evaluate to true, if the compiler optimizes away the conversion to float and back.