Can someone explain this: 0.2 + 0.1 = 0.30000000000000004? [duplicate]
Duplicates:
How is floating point stored? When does it matter?
Is floating point math broken?
Why does the following occur in the Python Interpreter?
>>> 0.1+0.1+0.1-0.3
5.551115123125783e-17
>>> 0.1+0.1
0.2
>>> 0.2+0.1
0.30000000000000004
>>> 0.3-0.3
0.0
>>> 0.2+0.1
0.30000000000000004
Why doesn't 0.2 + 0.1 = 0.3?
Solution 1:
That's because .1 cannot be represented exactly in binary floating point. If you try

>>> .1

Python will respond with .1 because it only prints up to a certain precision, but there's already a small round-off error. The same happens with .3, but when you issue

>>> .2 + .1
0.30000000000000004

the round-off errors in .2 and .1 accumulate. Also note:

>>> .2 + .1 == .3
False
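
You can see those round-off errors directly by asking Python for more digits than the default repr prints. A minimal sketch, assuming a standard IEEE 754 double-precision build of CPython:

>>> format(0.1, '.20f')       # the double nearest to 0.1, shown to 20 decimal places
'0.10000000000000000555'
>>> format(0.2, '.20f')
'0.20000000000000001110'
>>> format(0.2 + 0.1, '.20f') # the two errors add up and land just above 0.3
'0.30000000000000004441'
>>> format(0.3, '.20f')       # the double nearest to 0.3 sits just below it
'0.29999999999999998890'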
Solution 2:
Not all floating point numbers are exactly representable on a finite machine. Neither 0.1 nor 0.2 is exactly representable in binary floating point, and neither is 0.3.
A number is exactly representable if it is of the form a/b, where a and b are integers and b is a power of 2. Obviously, the data type also needs a large enough significand to store the number.
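
If you want to check this from Python itself (a sketch, not part of the original answer): float.as_integer_ratio() returns the exact fraction a/b that is actually stored, and its denominator is always a power of 2.

>>> (0.5).as_integer_ratio()    # 0.5 = 1/2 is exactly representable
(1, 2)
>>> (0.1).as_integer_ratio()    # the stored value is the nearest a / 2**55, not 1/10
(3602879701896397, 36028797018963968)
>>> 36028797018963968 == 2 ** 55
True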
I recommend Rob Kennedy's useful webpage as a nice tool to explore representability.