Why do I see a double variable initialized to some value like 21.4 as 21.399999618530273?

double r = 11.631;
double theta = 21.4;

In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.

How can I avoid this?


These accuracy problems are due to the internal representation of floating point numbers, and there's not much you can do to avoid them.

By the way, printing these values at run time usually still gives the correct result, because the default output precision (six significant digits for C++ streams) rounds the error away. For most operations this isn't much of an issue.
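
For example (a minimal sketch; the exact digits on the second line assume IEEE 754 doubles, as on virtually all modern platforms):

#include <iomanip>
#include <iostream>

int main()
{
    double theta = 21.4;

    // The default stream precision (6 significant digits) rounds the
    // stored value back to the short decimal you typed in.
    std::cout << theta << '\n';                           // 21.4

    // Asking for 17 significant digits exposes the nearest double.
    std::cout << std::setprecision(17) << theta << '\n';  // 21.399999999999999
}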


I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:

See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.

So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
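
You can watch the same thing happen in C++ (a small sketch, assuming IEEE 754 doubles):

#include <iomanip>
#include <iostream>

int main()
{
    double sum = 0.0;

    // 0.1 cannot be represented exactly, and each addition rounds,
    // so the error accumulates.
    for (int i = 0; i < 10; ++i)
        sum += 0.1;

    std::cout << std::setprecision(17) << sum << '\n';    // 0.99999999999999989
    std::cout << std::boolalpha << (sum == 1.0) << '\n';  // false
}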


If you have a value like:

double theta = 21.4;

And you want to do:

if (theta == 21.4)
{
}

You have to be a bit clever: instead of testing for exact equality, check whether the value of theta is acceptably close to 21.4.

#include <cmath>  // for std::fabs

if (std::fabs(theta - 21.4) <= 1e-6)
{
    // theta is close enough to 21.4 for our purposes
}
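
The fixed tolerance of 1e-6 is a judgment call that works at this magnitude, but it doesn't scale to values that are much larger or smaller. A relative comparison handles that; here is a minimal sketch (nearly_equal is my own helper name, not a standard function):

#include <algorithm>
#include <cmath>

// True when a and b differ by no more than `rel` times their magnitude.
bool nearly_equal(double a, double b, double rel = 1e-9)
{
    return std::fabs(a - b) <= rel * std::max(std::fabs(a), std::fabs(b));
}

Used the same way:

if (nearly_equal(theta, 21.4))
{
}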