int((0.1+0.7)*10) = 7 in several languages. How to prevent this?
Recently I came across a bug/feature in several languages. I have only a very basic understanding of how it's caused (and I'd like a detailed explanation). When I think of all the bugs I must have made over the years, the question is: how can I tell "hey, this might cause a ridiculous bug, I'd better use arbitrary-precision functions"? Which other languages have this behavior (and which don't, and why)? Also, why does 0.1+0.7 do this while, say, 0.1+0.3 doesn't (see the quick check after the examples below)? Are there any other well-known examples?
PHP
//the int cast below doesn't make any sense to me:
//why 7 after the typecast if it's represented internally as 8?
debug_zval_dump((0.1+0.7)*10); //double(8) refcount(1)
debug_zval_dump((int)((0.1+0.7)*10)); //long(7) refcount(1)
debug_zval_dump((float)((0.1+0.7)*10)); //double(8) refcount(1)
Python:
>>> ((0.1+0.7)*10)
7.9999999999999991
>>> int((0.1+0.7)*10)
7
Javascript:
alert((0.1+0.7)*10); //7.999999999999999
alert(parseInt((0.7+0.1)*10)); //7
Ruby:
>> ((0.1+0.7)*10).to_i
=> 7
>> ((0.1+0.7)*10)
=> 7.999999999999999
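For comparison, 0.1+0.3 really does come out clean. A quick check (Python shown, but all of the above languages use IEEE 754 doubles, so the stored values are identical; %.17f prints enough digits to expose them):

>>> print('%.17f' % (0.1 + 0.7))   # rounds to the double one ulp below 0.8
0.79999999999999993
>>> print('%.17f' % (0.1 + 0.3))   # rounds to the very same double as the literal 0.4
0.40000000000000002
>>> (0.1 + 0.3) * 10               # ...so the product rounds back to exactly 4.0
4.0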
Solution 1:
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic (David Goldberg, 1991).
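In short: 0.1 and 0.7 have no exact binary representation, so the stored doubles are already slightly off before any arithmetic happens; the visible error is just those tiny offsets accumulating. A sketch in Python, using the standard decimal module to display the exact values the doubles hold (digits abbreviated):

from decimal import Decimal

# Decimal(float) shows the exact binary value stored in the double
print(Decimal(0.1))               # 0.1000000000000000055511151231...
print(Decimal(0.7))               # 0.6999999999999999555910790149...
print(Decimal(0.1 + 0.7))         # 0.7999999999999999333866185224...
print(Decimal((0.1 + 0.7) * 10))  # 7.9999999999999991118215802998...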
Solution 2:
It's not a language issue. It's a general issue with floating-point arithmetic.
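In particular, the value was never 8 internally: PHP just rounds 7.999999999999999 to 8 when printing (its default precision setting shows about 14 significant digits), while the (int) cast truncates toward zero. Rounding before converting is the usual fix; a Python sketch of the same distinction:

x = (0.1 + 0.7) * 10

print('%.17f' % x)    # 7.99999999999999911  -- what is actually stored
print(int(x))         # 7  -- int() and (int) truncate toward zero
print(int(round(x)))  # 8  -- round to nearest first, then convert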
Solution 3:
Stop using floats. No, really.
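More concretely: for decimal quantities (money above all), use a decimal or rational type so that no binary rounding ever happens, at some speed cost. A sketch with Python's standard decimal and fractions modules; PHP's BCMath and Ruby's BigDecimal serve the same purpose:

from decimal import Decimal
from fractions import Fraction

# Build values from strings, never from floats, so nothing is rounded in binary
print((Decimal('0.1') + Decimal('0.7')) * 10)       # 8.0
print(int((Decimal('0.1') + Decimal('0.7')) * 10))  # 8
print((Fraction('1/10') + Fraction('7/10')) * 10)   # 8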