How to avoid floating point precision errors with floats or doubles in Java?

I have a very annoying problem with long sums of floats or doubles in Java. Basically the idea is that if I execute:

for ( float value = 0.0f; value < 1.0f; value += 0.1f )
    System.out.println( value );

What I get is:

0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.70000005
0.8000001
0.9000001

I understand that there is an accumulation of floating-point precision error; however, how do I get rid of it? I tried using doubles to halve the error, but the result is still the same.

Any ideas?


There is no exact representation of 0.1 as a float or double. Because of this representation error, the results are slightly different from what you expected.

A couple of approaches you can use:

  • When using the double type, only display as many digits as you need. When checking for equality, allow for a small tolerance either way (see the sketch after this list).
  • Alternatively use a type that allows you to store the numbers you are trying to represent exactly, for example BigDecimal can represent 0.1 exactly.
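
A minimal sketch of the first approach, using a plain double accumulator; the tolerance of 1e-9 is just an illustrative choice:

double value = 0.0;
for (int i = 0; i < 10; i++)
    value += 0.1;

// Only display as many digits as you need.
System.out.printf("%.1f%n", value);                     // prints 1.0

// Compare with a small tolerance instead of ==.
double tolerance = 1e-9;
System.out.println(Math.abs(value - 1.0) < tolerance);  // prints true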

Example code for BigDecimal:

import java.math.BigDecimal;

// Construct the step from a String so that 0.1 is represented exactly.
BigDecimal step = new BigDecimal("0.1");
for (BigDecimal value = BigDecimal.ZERO;
     value.compareTo(BigDecimal.ONE) < 0;
     value = value.add(step)) {
    System.out.println(value);
}

See it online: ideone


You can avoid this specific problem using classes like BigDecimal. float and double, being IEEE 754 floating-point, are not designed to be perfectly accurate; they're designed to be fast. But note Jon's point below: BigDecimal can't represent "one third" accurately, any more than double can represent "one tenth" accurately. Still, for (say) financial calculations, BigDecimal and classes like it tend to be the way to go, because they can represent numbers in the way that we humans tend to think about them.
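
A small sketch illustrating both points, using java.math.BigDecimal (the scale of 20 digits in the division is an arbitrary choice):

import java.math.BigDecimal;
import java.math.RoundingMode;

// 0.1 has an exact decimal representation, so ten additions give exactly 1.0.
BigDecimal tenth = new BigDecimal("0.1");
BigDecimal sum = BigDecimal.ZERO;
for (int i = 0; i < 10; i++)
    sum = sum.add(tenth);
System.out.println(sum);   // prints 1.0

// "One third" has no exact decimal representation: an exact divide() throws
// ArithmeticException, so a scale and rounding mode must be supplied.
BigDecimal third = BigDecimal.ONE.divide(new BigDecimal(3), 20, RoundingMode.HALF_UP);
System.out.println(third); // prints 0.33333333333333333333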


Don't use float/double as a loop counter, as this maximises your rounding error. If you just use the following

for (int i = 0; i < 10; i++)
    System.out.println(i / 10.0);

it prints

0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9

I know BigDecimal is a popular choice, but I prefer double, not because it's much faster but because it's usually much shorter/cleaner to understand.

If you count the number of symbols as a measure of code complexity

  • using double => 11 symbols
  • using BigDecimal (from @Mark Byers' example) => 21 symbols

BTW: don't use float unless there is a really good reason to not use double.
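
A quick sketch of the difference; the exact digits printed may vary, but the double error only shows up around the 16th significant digit while the float error already appears around the 7th:

float f = 0.0f;
double d = 0.0;
for (int i = 0; i < 10; i++) {
    f += 0.1f;  // error visible around the 7th significant digit
    d += 0.1;   // error only around the 16th significant digit
}
System.out.println(f);  // something like 1.0000001
System.out.println(d);  // something like 0.9999999999999999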


It's not just an accumulated error (and it has absolutely nothing to do with Java). 0.1f, once translated to actual code, does not have the value 0.1 - you already get a rounding error there.

From The Floating-Point Guide:

What can I do to avoid this problem?

That depends on what kind of calculations you’re doing.

  • If you really need your results to add up exactly, especially when you work with money: use a special decimal datatype.
  • If you just don’t want to see all those extra decimal places: simply format your result rounded to a fixed number of decimal places when displaying it.
  • If you have no decimal datatype available, an alternative is to work with integers, e.g. do money calculations entirely in cents (see the sketch after this list). But this is more work and has some drawbacks.
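
A minimal sketch of the integer approach, assuming amounts are kept as long values in cents (the prices here are made up for illustration):

// Work in whole cents so every amount is represented exactly.
long priceInCents = 1999;   // $19.99
long taxInCents = 160;      // $1.60
long totalInCents = priceInCents + taxInCents;

// Convert to dollars only when displaying.
System.out.printf("Total: $%d.%02d%n", totalInCents / 100, totalInCents % 100);
// prints: Total: $21.59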

Read the linked-to site for detailed information.