Why does adding a small float to a large float just drop the small one?

Say I have:

float a = 3            // (gdb) p/f a   = 3
float b = 299792458    // (gdb) p/f b   = 299792448

then

float sum = a + b      // (gdb) p/f sum = 299792448

I think it has something to do with the mantissa shifting around. Can someone explain exactly what's going on? (These are 32-bit floats.)


Solution 1:

32-bit floats only have 24 bits of precision. Thus, a float cannot hold b exactly - it does the best job it can by setting some exponent and mantissa to get as close as possible¹. (That's the nearest representable float to the constant in the source; the default FP rounding mode is "nearest".)

When you then consider the floating point representations of b and a and try to add them, the addition operation shifts the small number a's mantissa downwards as it tries to match b's exponent, to the point where the value (3) falls off the end and you're left with 0. Hence, the addition operator ends up adding floating point zero to b. (This is an over-simplification; low bits can still affect rounding if there's partial overlap of the mantissas.)

In general, the infinite-precision addition result has to get rounded to the nearest float with the current FP rounding mode, and that happened to be equal to b.

See also Why adding big to small in floating point introduce more error? for cases where the number does change some, but with large rounding error; it uses decimal significant figures as a way to help understand binary float rounding.


Footnote 1: For numbers that large, the nearest two floats are 32 apart. Modern clang even warns about an int constant in the source rounding to a float that represents a different value - unless you write it as a float or double constant already (like 299792458.0f), in which case the rounding happens without warning.

That's why the smallest a value that will round sum up to 299792480.0f instead of down to 299792448.0f is about 16.000001 for that b value which rounded to 299792448.0f. Runnable example on the Godbolt compiler explorer.

The default FP rounding mode rounds to nearest, with an even mantissa as the tie-break. 16.0 falls exactly half-way, and thus rounds to a bit-pattern of 0x4d8ef3c2, not up to 0x4d8ef3c3 (https://www.h-schmidt.net/FloatConverter/IEEE754.html). Anything slightly greater than 16 rounds up, because rounding cares about the infinite-precision result; the hardware doesn't actually shift out bits before adding, that was an over-simplification. The nearest float to 16.000001 has only the low bit set in its mantissa, bit-pattern 0x41800001: it's actually about 1.0000001192092896 × 2^4, or 16.0000019... Much smaller and it would round to exactly 16.0f, within 1 ULP (unit in the last place) of b, which wouldn't change b because b's mantissa is already even.


If you avoid early rounding by using double a,b, the smallest value you can add that rounds sum up to 299792480.0f instead of down to 299792448.0f when you do float sum = a+b is about a=6.0000001;. That makes sense because the integer value ...58 stays as ...58.0 instead of rounding down to ...48.0f; i.e. the rounding error in float b = ...58 was -10, so a can be that much smaller.

There are two rounding steps this time, though: a+b rounds to the nearest double if that addition isn't exact, then that double rounds to a float. (Or if FLT_EVAL_METHOD == 2, like C compiling for 80-bit x87 floating point on 32-bit x86, the + result would round to 80-bit long double, then to float.)

Solution 2:

Floating-point numbers have limited precision. If you're using a float, you're only using 32 bits. However, some of those bits are reserved for the sign and exponent, so you really only have 23 explicitly stored mantissa bits (24 bits of precision counting the implicit leading 1). The number you give is too large for that, so the last few digits are lost to rounding.

To make this a little more intuitive, suppose all of the bits except 2 were reserved for the exponent. Then we can represent 0, 1, 2, and 3 without trouble, but above that we have to increment the exponent, and the representable numbers get spread out: with a step of 2 we get 4 and 6, but nothing in between, so 4 and 5 won't both be there. Hence 4 + 1 = 4: the true sum, 5, rounds back to the nearest representable value.