JavaScript Math Error: Inexact Floats [duplicate]

Solution 1:

Floating point values are inexact.

This is pretty much the answer to the question. There is finite precision, which means that some numbers cannot be represented exactly.

Some languages support arbitrary-precision numeric types, rationals, complex numbers, etc. at the language level, but JavaScript does not. Neither do C and Java.

An IEEE 754 standard floating point value cannot represent e.g. 0.1 exactly. This is why numerical calculations involving cents etc. must be done very carefully. Sometimes the solution is to store values as integer cents instead of as floating point dollars.
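This is easy to observe in JavaScript itself: 0.1 and 0.2 are both stored inexactly, so their sum is not exactly 0.3. Below is a minimal sketch of the cents-as-integers workaround (the variable names are illustrative):

```javascript
// 0.1 cannot be represented exactly in IEEE 754 binary floating point,
// so even a simple sum drifts away from the expected decimal result.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Common workaround for money: keep amounts as integer cents.
const totalCents = 10 + 20;                  // exact integer arithmetic
console.log(totalCents);                     // 30
console.log((totalCents / 100).toFixed(2));  // "0.30" (for display only)
```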


"Floating" point concept, an analogy in base 10

To see why floating point values are imprecise, consider the following analogy:

  • You only have enough memory to remember 5 digits
  • You want to be able to represent values in as wide range as practically possible

With integers, you can represent values in the range -99999 to +99999. Values outside that range would require you to remember more than 5 digits, which (for the sake of this example) you can't do.

Now you might consider a fixed-point representation, something like abc.de. You can then represent values in the range -999.99 to +999.99, with 2 digits after the decimal point, e.g. 3.14, -456.78, etc.

Now consider a floating point version. In your resourcefulness, you came up with the following scheme:

n = abc × 10^de

Now you still remember only 5 digits a, b, c, d, e, but you can represent a much wider range of numbers, even non-integers. For example:

123 × 10^0 = 123.0

123 × 10^3 = 123,000.0

123 × 10^6 = 123,000,000.0

123 × 10^-3 = 0.123

123 × 10^-6 = 0.000123

This is how the name "floating point" came into being: the decimal point "floats around" in the above examples.

Now you can represent a wide range of numbers, but note that you can't represent 0.1234. Neither can you represent 123,001.0. In fact, there are a lot of values that you can't represent.
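The 3-significant-digit part of the analogy can be imitated in JavaScript with toPrecision, which rounds a number to a given count of significant digits (3 here, standing in for the abc significand):

```javascript
// Rounding to 3 significant digits mimics the abc × 10^de scheme:
// both example values snap to the nearest representable neighbor.
console.log(Number((0.1234).toPrecision(3)));  // 0.123  (0.1234 is lost)
console.log(Number((123001).toPrecision(3)));  // 123000 (123,001 is lost)
```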

This is pretty much why floating point values are inexact. They can represent a wide range of values, but since you are limited to a fixed amount of memory, you must sacrifice precision for magnitude.
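JavaScript's own 64-bit doubles show exactly this trade-off: past Number.MAX_SAFE_INTEGER (2^53 - 1), neighboring integers collapse onto the same representable value, just like 123,001 in the 5-digit analogy:

```javascript
const big = Number.MAX_SAFE_INTEGER;  // 9007199254740991, i.e. 2^53 - 1
console.log(big + 1 === big + 2);     // true: both round to the same double
console.log(9007199254740993);        // prints 9007199254740992
```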


More technicalities

The abc is called the significand, a.k.a. coefficient or mantissa. The de is the exponent, a.k.a. scale or characteristic. As usual, the computer uses base 2 instead of base 10. In addition to remembering the "digits" (bits, really), it must also remember the signs of the significand and the exponent.

A single precision floating point type usually uses 32 bits. A double precision usually uses 64 bits.
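JavaScript numbers are always double precision, but Math.fround shows what a value would look like in single precision, which makes the precision difference between the two formats visible:

```javascript
const asDouble = 0.1;               // 64-bit: ~15-17 significant decimal digits
const asSingle = Math.fround(0.1);  // rounded to 32-bit: ~7 significant digits
console.log(asSingle);              // 0.10000000149011612
console.log(asDouble === asSingle); // false: two different approximations of 0.1
```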

See also

  • What Every Computer Scientist Should Know About Floating-Point Arithmetic
  • Wikipedia/IEEE 754

Solution 2:

That behavior is inherent to floating point arithmetic. That is why floating point arithmetic is not suitable for dealing with money, which needs to be exact.

There exist libraries, like this one, which help you limit rounding errors to the point where you actually need them (when converting to text for display). Such libraries don't really deal with floating point values, but with fractions (of integer values). So no 0.25, but 1/4 and so on.
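The fraction idea can be sketched without any library (this is an illustrative toy, not that library's actual API): keep numerator and denominator as integers, and only convert to decimal at the very end.

```javascript
// Greatest common divisor, used to keep fractions reduced.
const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));

// Add two fractions [numerator, denominator] exactly, using integer math.
function addFractions([n1, d1], [n2, d2]) {
  const n = n1 * d2 + n2 * d1;
  const d = d1 * d2;
  const g = gcd(n, d);
  return [n / g, d / g];  // reduced, still exact
}

// 1/10 + 2/10 is exactly 3/10 -- no rounding error anywhere.
console.log(addFractions([1, 10], [2, 10]));  // [ 3, 10 ]
```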

Solution 3:

Floating point values can efficiently represent values in a much wider range than integer values can. However, this comes at a price: some values cannot be represented exactly (because they are stored in binary). Every negative power of 10, for example (0.1, 0.01, etc.), falls into this category.
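Printing 0.1 with more significant digits than the default reveals the approximation that is actually stored:

```javascript
// The default printout hides the error; asking for 20 significant digits
// shows the binary double that stands in for 0.1.
console.log((0.1).toPrecision(20));  // "0.10000000000000000555"
```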

If you want exact results, try not to use floating point arithmetic.

Of course, sometimes you can't avoid it. In that case, a few simple guidelines may help you minimize roundoff errors:

  1. Don't subtract nearly equal values. (0.1-0.0999)
  2. Add or multiply the biggest values first. (100*10)* 0.1 instead of 100*(10*0.1)
  3. Multiply first, then divide. (14900*10.8)/100 instead of 14900*(10.8/100)
  4. If exact values are available, use them instead of calculating them; don't sacrifice exactness for 'prettier' code.
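Guideline 1 is the easiest to demonstrate: subtracting nearly equal values wipes out most of the significant digits (catastrophic cancellation). The difference below should be exactly 1e-15, but the computed result is off by roughly 10%:

```javascript
const exact = 1e-15;
const computed = (1 + 1e-15) - 1;  // nearly equal values: digits cancel
console.log(computed === exact);   // false
console.log(Math.abs(computed - exact) / exact);  // ~0.1: huge relative error
```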