What does gcc's ffast-math actually do?
Solution 1:
-ffast-math does a lot more than just break strict IEEE compliance.
First of all, of course, it does break strict IEEE compliance: it allows, e.g., reordering operations into a form that is mathematically equivalent (ideally) but not bit-identical in floating point.
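One concrete consequence: strict evaluation order blocks vectorization of floating-point reductions. A minimal sketch (the function name is illustrative):

```c
/* Under strict IEEE rules GCC must add the elements in source order,
   which prevents vectorizing this loop. With -O3 -ffast-math
   (specifically -fassociative-math) it may keep several partial sums
   in parallel and combine them at the end, changing the rounding. */
double sum(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];  /* may be reassociated into parallel partial sums */
    return s;
}
```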
Second, it disables setting errno after single-instruction math functions, which avoids a write to a thread-local variable (this can make a 100% performance difference for those functions on some architectures).
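For illustration, a small program that relies on that errno side effect; assuming a glibc-style libm, the check works at default settings but can fail under -ffast-math, where GCC may emit a bare hardware square-root instruction:

```c
#include <errno.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    errno = 0;
    double r = sqrt(-1.0);  /* domain error */
    /* Without -ffast-math, the library sets errno = EDOM here. With
       -ffast-math (-fno-math-errno), errno may never be written, so
       this branch is not taken. Link with -lm. */
    if (errno == EDOM)
        printf("domain error reported via errno\n");
    else
        printf("errno untouched, result = %f\n", r);
    return 0;
}
```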
Third, it makes the assumption that all math is finite, which means that no checks for NaN (or zero) are made in places where they would have detrimental effects. It is simply assumed that these values aren't going to occur.
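A hedged sketch of what that can mean in practice (the function name is made up):

```c
#include <math.h>

/* Under -ffinite-math-only (part of -ffast-math), the compiler may
   assume x is never NaN and fold this test to "false", silently
   turning the fallback path into dead code. */
double clamp_nan(double x) {
    if (isnan(x))      /* may be optimized away under -ffast-math */
        return 0.0;
    return x;
}
```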
Fourth, it enables reciprocal approximations for division and reciprocal square root.
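A sketch of where that tends to show up; scale is a hypothetical function:

```c
/* With -freciprocal-math (implied by -ffast-math), GCC may compute
   1.0f / divisor once and turn the per-element division into a
   multiplication, or use hardware reciprocal estimates such as x86
   rcpps/rsqrtps, trading the last few bits of precision for speed. */
void scale(float *a, int n, float divisor) {
    for (int i = 0; i < n; i++)
        a[i] /= divisor;  /* may become a[i] *= (1.0f / divisor) */
}
```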
Further, it disables signed zeros (generated code assumes signed zeros do not exist, even if the target supports them) and rounding math, which enables, among other things, constant folding at compile time.
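For instance, some simplifications are only valid once signed zeros (and NaNs/infinities) are assumed away; a sketch:

```c
/* Identities that only hold when the sign of zero and NaNs can be
   ignored, so they are folded only under -ffast-math. */
double add_zero(double x) {
    return x + 0.0;  /* not an identity in strict mode:
                        (-0.0) + 0.0 == +0.0 */
}
double mul_zero(double x) {
    return x * 0.0;  /* foldable to plain 0.0 only if x is assumed
                        finite and negative x yielding -0.0 is ignored */
}
```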
Last, it generates code that assumes that no hardware interrupts can happen due to signalling/trapping math (that is, if these cannot be disabled on the target architecture and consequently do happen, they will not be handled).
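A hedged sketch of the kind of code this affects; guarded_div is illustrative:

```c
/* Under -fno-trapping-math (implied by -ffast-math), GCC assumes the
   division cannot raise a hardware FP exception, so it may hoist or
   speculate it past the guard. If divide-by-zero traps were unmasked,
   e.g. with glibc's feenableexcept(FE_DIVBYZERO), the program could
   then fault even when ok is false. */
double guarded_div(double a, double b, int ok) {
    return ok ? a / b : 0.0;
}
```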
Solution 2:
As you mentioned, it allows optimizations that do not preserve strict IEEE compliance.
An example is rewriting

```c
x = x*x*x*x*x*x*x*x;
```

as

```c
x *= x;
x *= x;
x *= x;
```
Because floating-point arithmetic is not associative, the ordering and factoring of the operations will affect results due to round-off. Therefore, this optimization is not done under strict FP behavior.
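A small self-contained demonstration of that non-associativity; the constants are chosen so that one grouping loses the 1.0f to absorption:

```c
#include <stdio.h>

int main(void) {
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    /* The same mathematical sum, rounded differently: */
    printf("%g\n", (a + b) + c);  /* prints 1: the cancellation happens first */
    printf("%g\n", a + (b + c));  /* prints 0: c is absorbed into b,
                                     since 1.0f is below the ulp at 1e8 */
    return 0;
}
```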
I haven't checked whether GCC actually performs this particular optimization, but the idea is the same.