C++ floating point precision [duplicate]

Possible Duplicate:
Floating point inaccuracy examples

double a = 0.3;
std::cout.precision(20);
std::cout << a << std::endl;

result: 0.2999999999999999889

double a, b;
a = 0.3;
b = 0;
for (int i = 1; i <= 50; i++) {
  b = b + a;
}
std::cout.precision(20);
std::cout << b << std::endl;

result: 15.000000000000014211

So 'a' is smaller than it should be, but if we add 'a' 50 times, the result is bigger than it should be.

Why is this? And how do I get the correct result in this case?


To get the correct results, don't set the precision higher than what is actually available for this numeric type:

#include <iostream>
#include <limits>

int main()
{
    double a = 0.3;
    // digits10 is the number of decimal digits a double can represent
    // without change: 15 for IEEE 754 doubles.
    std::cout.precision(std::numeric_limits<double>::digits10);
    std::cout << a << std::endl;

    double b = 0;
    for (int i = 1; i <= 50; i++) {
        b = b + a;
    }
    std::cout << b << std::endl;   // precision is sticky, so 15 digits here too
}

If that loop runs for 5000 iterations instead of 50, though, the accumulated error will show up even with this approach -- that's just how floating-point numbers work.
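
For illustration, a minimal sketch of that effect (the 5000-iteration figure comes from the paragraph above; exactly how much error survives the 15-digit rounding can vary by platform):

#include <iostream>
#include <limits>

int main()
{
    double b = 0;
    for (int i = 1; i <= 5000; i++) {
        b = b + 0.3;   // each addition rounds the running sum to 53 bits
    }
    std::cout.precision(std::numeric_limits<double>::digits10);
    // The exact answer is 1500; after 5000 rounded additions the
    // accumulated error may no longer round away at 15 digits.
    std::cout << b << std::endl;
}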


Why is this?

Because floating-point numbers are stored in binary, in which 0.3 is 0.01001100110011001... repeating, just like 1/3 is 0.333333... repeating in decimal. When you write 0.3, you actually get 0.299999999999999988897769753748434595763683319091796875 (the infinite binary representation rounded to 53 significant bits).
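
One way to see the exact stored value for yourself, assuming a C library that prints the full decimal expansion (glibc's printf does):

#include <cstdio>

int main()
{
    // The double nearest to 0.3 is 5404319552844595 / 2^54, whose
    // decimal expansion terminates after 54 fractional digits.
    std::printf("%.54f\n", 0.3);
}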

Keep in mind that for the applications for which floating-point is designed, it's not a problem that you can't represent 0.3 exactly. Floating-point was designed to be used with:

  • Physical measurements, which are often measured to only 4 sig figs and never to more than 15.
  • Transcendental functions like logarithms and the trig functions, which are only approximated anyway.

For those uses, binary-decimal conversion error is pretty much irrelevant compared to the other sources of error.

Now, if you're writing financial software, for which $0.30 means exactly $0.30, it's different. There are decimal arithmetic classes designed for this situation.
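
The simplest form of that idea is to keep money in integer cents, so every addition is exact. A minimal sketch (real financial code would use a proper decimal or fixed-point library rather than hand-rolled cents):

#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    std::int64_t cents = 30;   // $0.30 stored exactly as 30 cents
    std::int64_t total = 0;
    for (int i = 1; i <= 50; i++) {
        total += cents;        // integer addition, no rounding
    }
    std::cout << "$" << total / 100 << "."
              << std::setw(2) << std::setfill('0') << total % 100
              << std::endl;    // prints $15.00
}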

And how do I get the correct result in this case?

Limiting the output precision to 15 significant digits is usually enough to hide the "noise" digits. Unless you actually need an exact answer, this is the best approach.
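
For example, a small sketch (std::setprecision from <iomanip> does the same thing as calling cout.precision):

#include <iomanip>
#include <iostream>

int main()
{
    double b = 15.000000000000014211;   // the noisy sum from the question
    // At 15 significant digits the noise rounds away, and the default
    // format drops trailing zeros, so this prints just "15".
    std::cout << std::setprecision(15) << b << std::endl;
}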