Should we compare floating-point numbers for equality against a *relative* error?

So far I've seen many posts dealing with equality of floating-point numbers. The standard answer to a question like "how should we decide whether x and y are equal?" is

```cpp
abs(x - y) < epsilon
```

where epsilon is a small, fixed constant. The reasoning is that the operands x and y are usually the results of computations that involve rounding error, so the exact equality operator == is not what we really mean; what we should actually ask is whether x and y are close, not equal.

Now, I feel that if x is "almost equal" to y, then x*10^20 should also be "almost equal" to y*10^20, in the sense that the relative error should be the same (but "relative" to what?). With numbers that big, however, the above test fails, i.e. that solution does not "scale".
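To make this concrete, here is a small C++ sketch (the epsilon value and the magnitudes are arbitrary choices of mine, purely for illustration) showing that the same relative discrepancy passes the test near 1 but fails it near 10^20:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double epsilon = 1e-9;

    // Near 1, a relative discrepancy of 1e-12 passes the absolute test:
    double a = 1.0, b = 1.0 + 1e-12;
    std::printf("%d\n", std::fabs(a - b) < epsilon);   // prints 1 ("equal")

    // The same relative discrepancy near 1e20 fails it by a huge margin,
    // because the absolute difference is about 1e8:
    double x = 1e20, y = 1e20 * (1.0 + 1e-12);
    std::printf("%d\n", std::fabs(x - y) < epsilon);   // prints 0 ("unequal")
}
```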

How would you deal with this issue? Should we rescale the numbers or rescale epsilon? How? (Or is my intuition wrong?)

Here is a related question, but I don't like its accepted answer: the reinterpret_cast trick seems a bit obscure to me, and I don't understand what's going on. Please try to provide a simple test.


It all depends on the specific problem domain. Yes, using relative error will be more correct in the general case, but it can be significantly less efficient since it involves an extra floating-point division. If you know the approximate scale of the numbers in your problem, using an absolute error is acceptable.
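For reference, a relative-error test typically looks something like the sketch below (the function name and the default tolerance are my own illustrative choices, not from any standard). Writing the comparison as a multiplication instead of a division also sidesteps the division cost mentioned above:

```cpp
#include <algorithm>
#include <cmath>

// Compare the difference against a tolerance scaled by the larger
// operand; measuring "relative to the larger magnitude" answers the
// "relative to what?" question and keeps the test symmetric in x and y.
bool nearly_equal(double x, double y, double rel_eps = 1e-9) {
    double diff  = std::fabs(x - y);
    double scale = std::max(std::fabs(x), std::fabs(y));
    return diff <= rel_eps * scale;
}
```

One caveat: near zero the scale factor collapses, so a purely relative test becomes extremely strict for tiny values. That is one reason combined absolute/relative tests are common.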

This page outlines a number of techniques for comparing floats, and it also covers important edge cases such as subnormals, infinities, and NaNs. It's a great read; I highly recommend going through it all the way.
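As one example in the spirit of the techniques that page covers (this is a sketch of my own, not the article's code), you can combine an absolute tolerance near zero with a relative tolerance elsewhere, while rejecting comparisons involving NaN or a lone infinity:

```cpp
#include <algorithm>
#include <cmath>

bool almost_equal(double x, double y,
                  double abs_eps = 1e-12, double rel_eps = 1e-9) {
    if (x == y) return true;                  // exact match; also inf == inf
    double diff = std::fabs(x - y);
    if (!std::isfinite(diff)) return false;   // an operand is inf or NaN
    if (diff <= abs_eps) return true;         // near-zero regime
    return diff <= rel_eps * std::max(std::fabs(x), std::fabs(y));
}
```

NaN falls out naturally here: NaN compares unequal to everything (including itself) and any difference involving it is non-finite, so the function returns false.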


As an alternative solution, why not just round or truncate the numbers and then do a straight comparison? By fixing the number of significant digits in advance, you know exactly what precision the comparison guarantees.
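A sketch of that idea (the helper names and the log10-based rounding are my own illustrative choices, not from any library):

```cpp
#include <cmath>

// Round v to `digits` significant decimal digits.
double round_to_sig(double v, int digits) {
    if (v == 0.0) return 0.0;
    int exp = (int)std::floor(std::log10(std::fabs(v)));
    double mag = std::pow(10.0, digits - 1 - exp);
    return std::round(v * mag) / mag;
}

bool equal_to_sig(double x, double y, int digits) {
    return round_to_sig(x, digits) == round_to_sig(y, digits);
}
```

One caveat: two values sitting just on opposite sides of a rounding boundary still compare unequal no matter how close they are, so this moves the boundary problem rather than eliminating it.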