Java: Inaccuracy using double [duplicate]

Possible Duplicate:
Retain precision with Doubles in java
Strange floating-point behaviour in a Java program

I'm making a histogram class, and I'm encountering a weird issue.

Here are the basics of the class; there are more methods, but they aren't relevant to the issue.

private int[] counters;
private int numCounters;
private double min, max, width;

public Histogram(double botRange, double topRange, int numCounters) {
    counters = new int[numCounters];
    this.numCounters = numCounters;
    min = botRange;
    max = topRange;
    // width of each bucket
    width = (max - min) / (double) numCounters;
}

public void plotFrequency() {
    for (int i = 0; i < counters.length; i++) {
        // bucket limits are offsets from min (min is 0 in both examples below)
        writeLimit(min + i * width, min + (i + 1) * width);
        System.out.println(counters[i]);
    }
}

private void writeLimit(double start, double end) {
    System.out.print(start + " <= x < " + end + "\t\t");
}

The problem happens when I plot the frequencies. I've created two instances: new Histogram(0, 1, 10) and new Histogram(0, 10, 10).

This is what they output.

Frequency
0.0 <= x < 0.1      989
0.1 <= x < 0.2      1008
0.2 <= x < 0.30000000000000004      1007
0.30000000000000004 <= x < 0.4      1044
0.4 <= x < 0.5      981
0.5 <= x < 0.6000000000000001       997
0.6000000000000001 <= x < 0.7000000000000001        1005
0.7000000000000001 <= x < 0.8       988
0.8 <= x < 0.9      1003
0.9 <= x < 1.0      978

Frequency
0.0 <= x < 1.0      990
1.0 <= x < 2.0      967
2.0 <= x < 3.0      1076
3.0 <= x < 4.0      1048
4.0 <= x < 5.0      971
5.0 <= x < 6.0      973
6.0 <= x < 7.0      1002
7.0 <= x < 8.0      988
8.0 <= x < 9.0      1003
9.0 <= x < 10.0     982    

So my question is: why am I getting the really long decimal limits in the first example but not the second?


Doubles are not exact.

This is because there are infinitely many possible real numbers but only a finite number of bits to represent them, so most values are stored as the nearest representable approximation.

Have a look at What Every Programmer Should Know About Floating-Point Arithmetic.
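
Here's a minimal sketch (my own example, not from the original post) you can run to see the inexactness directly:

public class DoubleDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 are each rounded to the nearest double before the addition
        System.out.println(0.1 + 0.2);        // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // prints false
    }
}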


From The Floating-Point Guide:

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.

That accounts for your first example. The second one only involves integers, not fractions, and integers can be represented exactly in the binary floating-point format (up to 2^53, since a double has 53 significant bits).
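
For illustration, here's a small sketch (the class name ExactValues is mine) using java.math.BigDecimal's double constructor, which shows the exact binary value a literal was rounded to:

import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // the exact double closest to the literal 0.1
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // small integers, by contrast, are stored exactly
        System.out.println(new BigDecimal(3.0)); // prints 3
    }
}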


Some decimals cannot be exactly represented by double values. 0.3 is one of those values.

All integer values up to a certain magnitude (2^53, the point where a double's 53-bit significand runs out) happen to have an exact representation as a double value, so you don't see the approximation there.

Consider how we think of numbers: the number 123 is represented as (1 * 100) + (2 * 10) + (3 * 1). We use 10 as our base; binary uses 2. So when you look at the fractional part of a number, how could you represent 0.3 by adding negative powers of 2 (1/2, 1/4, 1/8, ...)? You can't, exactly. The closest double to 0.3 is actually slightly below it (and happens to print as 0.3); the 0.30000000000000004 you see comes from computing 3 * 0.1, where 0.1 has already been rounded up slightly.
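
To make that concrete, here is a sketch (the class name PointThree is mine) comparing the closest double to 0.3 against the value the histogram computes for its third limit, 3 * 0.1:

import java.math.BigDecimal;

public class PointThree {
    public static void main(String[] args) {
        // the nearest double to 0.3 is slightly below 0.3 and prints as "0.3"
        System.out.println(new BigDecimal(0.3));
        // 0.299999999999999988897769753748434595763683319091796875

        // 3 * 0.1 lands on a different, slightly larger double
        System.out.println(new BigDecimal(3 * 0.1));
        // 0.3000000000000000444089209850062616169452667236328125

        System.out.println(0.3 == 3 * 0.1); // prints false
    }
}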