Floating point arithmetic not producing exact results [duplicate]

I need to do some floating point arithmetic in Java as shown in the code below:

import java.util.HashMap;
import java.util.Map;

public class TestMain {
    // Step size for each price band, keyed by the band's lower bound.
    private static Map<Integer, Double> ccc = new HashMap<Integer, Double>() {
      { put(1, 0.01); put(2, 0.02); put(3, 0.05); put(4, 0.1); put(6, 0.2);
        put(10, 0.5); put(20, 1.0); put(30, 2.0); put(50, 5.0); put(100, 10.0);
      }
    };

    // Look up the step for price i: scan downwards (up == true) or upwards
    // (up == false) until a band boundary present in the map is hit.
    Double increment(Double i, boolean up) {
        Double inc = null;

        while (inc == null) {
            inc = ccc.get(i.intValue());

            if (up)
                --i;
            else
                ++i;
        }
        return inc;
    }

    public static void main(String[] args) {
        TestMain tt = new TestMain();

        for (double i = 1; i < 1000; i += tt.increment(i, true)) {
            System.out.print(i + ",");
        }
    }
}

This is to simulate the range of values given as output by the Betfair spinner widget.

Floating point arithmetic in Java seems to introduce some unexpected errors. For example, I get 2.180000000000001 instead of 2.18. What use are floating point numbers if you can't trust the results of arithmetic performed on them? How can I get around this issue?


If you need exact decimal values, you should use java.math.BigDecimal. Then read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for background on why you're getting those results.

(I have a .NET-centric article which you may find easier to read - and certainly shorter. The differences between Java and .NET are mostly irrelevant for the purposes of understanding this issue.)
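For instance, here's a quick sketch contrasting repeated double addition with the same sum done in BigDecimal (the class name and the loop are just for illustration):

import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // Adding 0.01 one hundred times as doubles accumulates binary rounding error.
        double d = 0.0;
        for (int i = 0; i < 100; i++) {
            d += 0.01;
        }
        System.out.println(d); // prints something like 1.0000000000000007, not 1.0

        // A BigDecimal built from the String "0.01" holds the exact decimal value,
        // so the same sum comes out exact.
        BigDecimal total = BigDecimal.ZERO;
        BigDecimal step = new BigDecimal("0.01");
        for (int i = 0; i < 100; i++) {
            total = total.add(step);
        }
        System.out.println(total); // prints 1.00
    }
}

Note that new BigDecimal("0.01") (from a String) is exact, whereas new BigDecimal(0.01) (from a double) carries the double's rounding error along with it.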


Floating point numbers use binary fractions, not decimal fractions. That is, you're used to decimal fractions made up of a tenths digit, a hundredths digit, a thousandths digit, and so on: d1/10 + d2/100 + d3/1000 + ... But floating point numbers are in binary, so they have a halves digit, a quarters digit, an eighths digit, and so on: d1/2 + d2/4 + d3/8 + ...

Many decimal fractions cannot be expressed exactly in any finite number of binary digits. For example, 1/2 is no problem: in decimal it's .5, in binary it's .1. 3/4 is decimal .75, binary .11. But 1/10, a clean .1 in decimal, is .0001100110011... in binary, with the "0011" repeating forever. As the computer can store only a finite number of digits, at some point this has to get chopped off, so the stored value is not precise. When we convert back to decimal on output, we get a strange-looking number.
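If you want to see that chopping from Java, one quick way is to hand the double to the BigDecimal(double) constructor, which converts the exact binary value the double holds into decimal (class name here is just for the sketch):

import java.math.BigDecimal;

public class ExactValueDemo {
    public static void main(String[] args) {
        // 0.5 is a binary fraction, so it is stored exactly.
        System.out.println(new BigDecimal(0.5)); // 0.5
        // 0.1 is not; this prints the exact value the double actually holds:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}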

As Jon Skeet says, if you need exact decimal fractions, use BigDecimal. If performance is an issue, you could roll your own decimal fractions. For example, if you know you always want exactly 3 decimal places and that the numbers will not be more than a million or so, you could simply use ints with an assumed 3 decimal places, making adjustments as necessary when you do arithmetic and writing an output format function to insert the decimal point in the right place. But 99% of the time performance isn't a big enough issue to be worth the trouble.
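A rough sketch of that roll-your-own idea (the scale of 1000 and the names are assumptions for the example, not anything standard, and it only handles positive values):

public class FixedPointDemo {
    // Work in thousandths: 3 assumed decimal places, so 2.18 is stored as 2180.
    static final int SCALE = 1000;

    // Insert the decimal point on output; positive values only for this sketch.
    static String format(int thousandths) {
        return thousandths / SCALE + "." + String.format("%03d", thousandths % SCALE);
    }

    public static void main(String[] args) {
        int price = 2080;      // represents 2.080
        int step = 100;        // represents 0.100
        price += step;         // exact integer addition, no rounding error
        System.out.println(format(price)); // prints 2.180
    }
}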


Floating-point numbers are imprecise, especially since they work in binary fractions (1/2, 1/4, 1/8, 1/16, 1/32, ...) instead of decimal fractions (1/10, 1/100, 1/1000, ...). Just define what you feel is "close enough" and use something like Math.abs(a-b) < 0.000001.
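A tiny illustration of that, with the tolerance picked arbitrarily for the sketch:

public class EpsilonDemo {
    static final double EPSILON = 0.000001; // whatever we decide is "close enough"

    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) < EPSILON;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.2;                      // actually 0.30000000000000004
        System.out.println(sum == 0.3);              // false: exact comparison fails
        System.out.println(nearlyEqual(sum, 0.3));   // true: tolerance comparison passes
    }
}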


On a philosophical note, I wonder: Most computer CPUs today have built-in support for integer arithmetic and floating-point arithmetic, but no support for decimal arithmetic. Why not? I haven't written an application in years where floats were usable because of this rounding problem. You certainly can't use them for money amounts: No one wants to print a price on a sales receipt of "$42.3200003". No accountant is going to accept "we might be off by a penny here and there because we're using binary fractions and had rounding errors".

Floats are fine for measurements, like distance or temperature, where there's no such thing as an "exact answer" and you have to round off to the precision of your instruments at some point anyway. I suppose for people who are programming the computer in the chemistry lab, floats are used routinely. But for those of us in the business world, they're pretty much useless.

Back in those ancient days when I programmed on mainframes, the IBM 360 family of CPUs had built-in support for packed decimal arithmetic. They stored strings where each byte held two decimal digits, i.e. the first four bits had values from 0 to 9 and ditto the second four bits, and the CPU had arithmetic functions to manipulate them. Why can't Intel do something like that? Then Java could add a "decimal" data type and we wouldn't need all the extra junk.

I'm not saying to abolish floats, of course. Just add decimals.

Oh well, as great social movements go, I don't suppose this is one that is going to generate a lot of popular excitement or rioting in the streets.