Understanding floating point problems

Could someone here please help me understand how to determine when floating point limitations will cause errors in a calculation? For example, take the following code:

CalculateTotalTax = function (TaxRate, TaxFreePrice) {
    // Note: toFixed(4) returns a string, not a number.
    return ((parseFloat(TaxFreePrice) / 100) * parseFloat(TaxRate)).toFixed(4);
};

I have been unable to find any two input values that produce an incorrect result from this method. If I remove the toFixed(4), I can in fact see where the calculations start to lose accuracy (somewhere around the 6th decimal place). Having said that, my understanding of floats is that even small numbers can sometimes fail to be represented exactly. Or have I misunderstood, and can 4 decimal places (for example) always be represented accurately?
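A quick way to probe this in a JavaScript console (the values here are just illustrative): decimals that happen to be binary fractions are stored exactly, while most short decimals are not, even at one decimal place.

```javascript
// 0.25 and 0.5 are binary fractions (1/4 and 1/2), so they are stored exactly.
console.log(0.25 + 0.25 === 0.5); // true: every value here is exact

// 0.1, 0.2 and 0.3 are not binary fractions, so each is stored
// as the nearest double, and the rounding errors become visible.
console.log(0.1 + 0.2 === 0.3);   // false

// Printing 0.1 with more digits exposes the stored approximation.
console.log((0.1).toFixed(20));   // "0.10000000000000000555"
```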

MSDN explains floats as follows:

This means they cannot hold an exact representation of any quantity that is not a binary fraction (of the form k / (2 ^ n) where k and n are integers)

Now I assume this applies to all floats (including those used in JavaScript).

Fundamentally, my question boils down to this: how can one determine whether a specific method will be vulnerable to floating point errors, at what precision those errors will materialize, and what inputs will be required to produce them?

Hopefully what I am asking makes sense.


Solution 1:

Start by reading What Every Computer Scientist Should Know About Floating-Point Arithmetic: http://docs.sun.com/source/806-3568/ncg_goldberg.html

Short answer: double precision floats (which JavaScript uses for all its numbers) have about 16 decimal digits of precision. Rounding can vary from platform to platform. If it is absolutely essential that you get a consistently correct answer, you should do exact arithmetic yourself. This doesn't need to be hard: for currency, you can often just multiply by 100 and store the number of cents as an integer.
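A sketch of that cents-as-integers approach in JavaScript (the function name and the sample values are illustrative, not from the question):

```javascript
// Hold money as whole cents; only format the final result for display.
function calculateTotalTaxCents(taxRatePercent, taxFreeCents) {
  // taxFreeCents is an integer, so the product is computed from exact
  // operands; Math.round snaps the result back to a whole number of cents.
  return Math.round(taxFreeCents * taxRatePercent / 100);
}

const priceCents = 19999; // e.g. 199.99 stored as 19999 cents
const taxCents = calculateTotalTaxCents(17.5, priceCents);
console.log((taxCents / 100).toFixed(2)); // "35.00" — format only at the edge
```

The design point is that rounding happens exactly once, at a known place, instead of accumulating invisibly through every intermediate float operation.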

But if it suffices to get the answer with a high degree of precision, floats should be good enough, especially double precision.

Solution 2:

There are two important things you should know when dealing with floats:

1. You should be aware of machine epsilon, so that you know how much precision you have to work with.

2. You should not assume that two values which are equal in base 10 are also equal in base 2 on a machine with limited precision.

if ((6.0 / 10.0) / 3.0 != .2) {
    cout << "gotcha" << endl;  // this prints: the computed value is not exactly .2
}

Point 2 should be convincing enough to make you avoid comparing floating point numbers for equality. Instead, compare using a threshold together with the greater-than and less-than operators.
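A sketch of such a threshold comparison in JavaScript (the helper name and the tolerance choice are illustrative, not a standard API). Number.EPSILON is the gap between 1 and the next representable double:

```javascript
// Treat two doubles as equal when they differ by less than a small tolerance.
// Note: an absolute tolerance like this suits values near 1; for very large
// or very small magnitudes, scale the tolerance by the operands instead.
function nearlyEqual(a, b, tolerance = Number.EPSILON * 8) {
  return Math.abs(a - b) < tolerance;
}

console.log((6.0 / 10.0) / 3.0 === 0.2);           // false: exact comparison fails
console.log(nearlyEqual((6.0 / 10.0) / 3.0, 0.2)); // true: threshold comparison
```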

Solution 3:

The other answers have pointed to good resources for understanding this problem. If you're actually working with monetary values in your code (as in your example), you should prefer decimal types (System.Decimal in .NET). These avoid some of the rounding problems of floats and better match the domain.

Solution 4:

No, the number of decimal places has nothing to do with whether a value can be represented exactly.

Try .1 * 3, or 162.295 / 10, or 24.0 + 47.98. Those fail for me in JS, but 24.0 * 47.98 does not.
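Running those in a JavaScript console shows the pattern (the per-expression results below are the ones reported above; only the first is annotated with its exact printed output):

```javascript
console.log(0.1 * 3);      // 0.30000000000000004: the product rounds up a bit
console.log(162.295 / 10); // not exactly 16.2295
console.log(24.0 + 47.98); // not exactly 71.98
console.log(24.0 * 47.98); // prints 1151.52: the rounding happens to land on
                           // the same double as the literal 1151.52
```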

So to answer your three questions: any operation at any precision is potentially vulnerable. Whether a given input will trigger an error is a question I don't know how to answer precisely, but I suspect several factors are involved: 1) how close the exact answer is to the nearest binary fraction; 2) the precision of the engine performing the calculation; and 3) the method used to perform the calculation (e.g., multiplying by bit-shifting may give different results than multiplying by repeated addition).