Double vs. BigDecimal?
A BigDecimal is an exact way of representing numbers. A Double has a certain precision. Working with doubles of very different magnitudes (say d1=1.0e16 and d2=0.001) can result in the 0.001 being dropped altogether when summing, because the difference in magnitude is so large. With BigDecimal this would not happen.
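A minimal sketch of that effect (the magnitudes 1.0e16 and 0.001 are chosen here only to make the loss obvious):

import java.math.BigDecimal;

double d1 = 1.0e16;
double d2 = 0.001;
System.out.println(d1 + d2 == d1); // true: the 0.001 is absorbed completely

BigDecimal b1 = new BigDecimal("10000000000000000");
BigDecimal b2 = new BigDecimal("0.001");
System.out.println(b1.add(b2)); // 10000000000000000.001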
The disadvantage of BigDecimal is that it's slower, and it's a bit more difficult to program algorithms that way (due to +, -, * and / not being overloaded).
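For example, a formula as simple as (a + b) * c / d has to be written as chained method calls; the variable names and values below are just placeholders:

import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal a = new BigDecimal("1.1");
BigDecimal b = new BigDecimal("2.2");
BigDecimal c = new BigDecimal("3.3");
BigDecimal d = new BigDecimal("4.4");

// (a + b) * c / d, with an explicit scale and rounding mode for the division
BigDecimal result = a.add(b).multiply(c).divide(d, 10, RoundingMode.HALF_UP);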
If you are dealing with money, or precision is a must, use BigDecimal. Otherwise doubles tend to be good enough.
I do recommend reading the javadoc of BigDecimal, as it explains these things better than I do here :)
My English is not good, so I'll just write a simple example here.
double a = 0.02;
double b = 0.03;
double c = b - a;
System.out.println(c);
BigDecimal _a = new BigDecimal("0.02");
BigDecimal _b = new BigDecimal("0.03");
BigDecimal _c = _b.subtract(_a);
System.out.println(_c);
Program output:
0.009999999999999998
0.01
Does anyone still want to use double? ;)
There are two main differences from double:
- Arbitrary precision: similarly to BigInteger, it can represent numbers of arbitrary precision and size (whereas a double has a fixed number of bits)
- Base 10 instead of base 2: a BigDecimal is n*10^-scale, where n is an arbitrarily large signed integer and scale can be thought of as the number of digits the decimal point is moved left or right (see the example after this list)
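You can inspect both parts with BigDecimal's own accessors; a small illustration:

import java.math.BigDecimal;

BigDecimal price = new BigDecimal("123.45");
System.out.println(price.unscaledValue()); // 12345 (the integer n)
System.out.println(price.scale());         // 2     (decimal point moved 2 digits to the left)
// i.e. 123.45 = 12345 * 10^-2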
It is still not true to say that BigDecimal can represent any number. But two reasons you should use BigDecimal for monetary calculations are:
- It can represent all numbers that can be written in decimal notation, and that includes virtually all numbers in the monetary world (you never transfer $1/3 to someone).
- The precision can be controlled to avoid accumulated errors. With a double, as the magnitude of the value increases, its absolute precision decreases, and this can introduce significant error into the result (see the sketch after this list).
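A sketch of that explicit control, using the divide overload that takes a scale and rounding mode (the scale of 2 and HALF_UP are just example choices, not a general rule for money):

import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal total = new BigDecimal("100.00");
BigDecimal people = new BigDecimal("3");

// Without a scale and rounding mode this division would throw
// ArithmeticException, because 100/3 has no exact decimal representation.
BigDecimal share = total.divide(people, 2, RoundingMode.HALF_UP);
System.out.println(share); // 33.33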
If you write down a fractional value like 1/7 as a decimal value you get
1/7 = 0.142857142857142857142857142857142857142857...
with the sequence 142857 repeating infinitely. Since you can only write a finite number of digits, you will inevitably introduce a rounding (or truncation) error.
Numbers like 1/10 or 1/100, expressed as binary numbers with a fractional part, also have an infinite number of digits after the (binary) point:
1/10 = binary 0.0001100110011001100110011001100110...
Doubles store values as binary and therefore might introduce an error solely by converting a decimal number to a binary number, without even doing any arithmetic.
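This conversion error can be made visible with BigDecimal itself: the constructor that takes a double shows the exact binary value the literal 0.1 is actually stored as, while the String constructor keeps the decimal value.

import java.math.BigDecimal;

System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new BigDecimal("0.1"));
// 0.1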
Decimal types (like BigDecimal), on the other hand, represent values in base 10 (in Java's case as an integer unscaled value plus a decimal scale). This means that a decimal type is not more precise than a binary floating point or fixed point type in a general sense (i.e. it cannot store 1/7 without loss of precision), but it is more accurate for numbers that have a finite number of decimal digits, as is often the case for money calculations.
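For instance, even BigDecimal cannot compute 1/7 exactly; you have to say how much precision you want (MathContext.DECIMAL64, i.e. 16 significant digits, is just one possible choice):

import java.math.BigDecimal;
import java.math.MathContext;

BigDecimal one = BigDecimal.ONE;
BigDecimal seven = new BigDecimal("7");

// one.divide(seven) alone would throw ArithmeticException
// (non-terminating decimal expansion), so a precision must be supplied:
System.out.println(one.divide(seven, MathContext.DECIMAL64));
// 0.1428571428571429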
Java's BigDecimal has the additional advantage that it can have an arbitrary (but finite) number of digits on both sides of the decimal point, limited only by the available memory.