Python modulo on floats
Solution 1:
Actually, it's not true that 3.5 % 0.1 is 0.1. You can test this very easily:
>>> print(3.5 % 0.1)
0.1
>>> print(3.5 % 0.1 == 0.1)
False
In actuality, on most systems, 3.5 % 0.1 is 0.099999999999999811. But, on some versions of Python, str(0.099999999999999811) is 0.1:
>>> 3.5 % 0.1
0.099999999999999811
>>> repr(3.5 % 0.1)
'0.099999999999999811'
>>> str(3.5 % 0.1)
'0.1'
Now, you're probably wondering why 3.5 % 0.1 is 0.099999999999999811 instead of 0.0. That's because of the usual floating-point rounding issues. If you haven't read What Every Computer Scientist Should Know About Floating-Point Arithmetic, you should, or at least the brief Wikipedia summary of this particular issue.
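If you want to see where the error comes from directly, the standard-library fractions module can show the exact value the float 0.1 actually stores (a minimal sketch; the exact integers are what a typical IEEE-754 double produces):
>>> from fractions import Fraction
>>> Fraction(0.1)                    # the exact rational value stored for the float 0.1
Fraction(3602879701896397, 36028797018963968)
>>> Fraction(0.1) > Fraction(1, 10)  # slightly larger than one tenth
True
>>> Fraction(0.1) - Fraction(1, 10)  # by exactly this much
Fraction(1, 180143985094819840)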
Note also that 3.5/0.1 is not 34, it's 35. So, 3.5/0.1 * 0.1 + 3.5%0.1 is 3.5999999999999996, which isn't even close to 3.5. The identity x == (x/y)*y + x%y is pretty much fundamental to the definition of modulus, and true division breaks it in Python and just about every other programming language.
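Those two numbers are easy to verify in an interactive session; on a typical IEEE-754 platform you should see something like this (the exact trailing digits can vary with platform and Python version):
>>> 3.5 / 0.1                       # true division rounds up to exactly 35.0
35.0
>>> 3.5 / 0.1 * 0.1 + 3.5 % 0.1     # so the identity fails badly with true division
3.5999999999999996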
But Python 3 comes to the rescue there. Most people who know about // know that it's how you do "integer division" between integers, but don't realize that it's how you do modulus-compatible division between any types. 3.5//0.1 is 34.0, so 3.5//0.1 * 0.1 + 3.5%0.1 is (at least within a small rounding error of) 3.5. This has been backported to 2.x, so (depending on your exact version and platform) you may be able to rely on this. And, if not, you can use divmod(3.5, 0.1), which returns (within rounding error) (34.0, 0.09999999999999981) all the way back into the mists of time. Of course you still expected this to be (35.0, 0.0), not (34.0, almost-0.1), but you can't have that because of rounding errors.
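A quick sanity check of that floor-division identity, using the same numbers as above (again, the last digits may differ slightly by platform):
>>> 3.5 // 0.1                       # floor division agrees with the modulus
34.0
>>> divmod(3.5, 0.1)[0]              # divmod returns the same floor quotient
34.0
>>> 3.5 // 0.1 * 0.1 + 3.5 % 0.1     # and the pieces add back up to (almost exactly) 3.5
3.5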
If you're looking for a quick fix, consider using the Decimal type:
>>> from decimal import Decimal
>>> Decimal('3.5') % Decimal('0.1')
Decimal('0.0')
>>> print(Decimal('3.5') % Decimal('0.1'))
0.0
>>> (Decimal(7)/2) % (Decimal(1)/10)
Decimal('0.0')
This isn't a magical panacea; for example, you'll still have to deal with rounding error whenever the exact value of an operation isn't finitely representable in base 10. But the rounding errors line up better with the cases human intuition expects to be problematic. (There are also advantages to Decimal over float in that you can specify explicit precisions, track significant digits, etc., and in that it's actually the same in all Python versions from 2.4 to 3.3, while details about float have changed twice in the same time. It's just that it's not perfect, because that would be impossible.) But when you know in advance that your numbers are all exactly representable in base 10, and they don't need more digits than the precision you've configured, it will work.
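For example, here is a rough illustration of both points, using a working precision of 6 digits chosen purely for demonstration: a quotient that doesn't terminate in base 10 still rounds, while numbers exactly representable within that precision behave as intuition expects:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 6                 # explicitly configured precision
>>> Decimal(1) / Decimal(3)               # not finitely representable in base 10, so it rounds
Decimal('0.333333')
>>> Decimal('3.5') % Decimal('0.1')       # exactly representable, so no surprises
Decimal('0.0')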
Solution 2:
Modulo gives you the remainder of a division. 3.5 divided by 0.1 should give you 35 with a remainder of 0. But since floats are based on powers of two, the numbers are not exact and you get rounding errors.
If you need your division of decimal numbers to be exact, use the decimal module:
>>> from decimal import Decimal
>>> Decimal('3.5') / Decimal('0.1')
Decimal('35')
>>> Decimal('3.5') % Decimal('0.1')
Decimal('0.0')
Since my answer is being bashed as misleading, here comes the whole story:
The Python float 0.1 is slightly larger than one-tenth:
>>> '%.50f' % 0.1
'0.10000000000000000555111512312578270211815834045410'
If you divide the float 3.5 by such a number, you get a remainder of almost 0.1.
Let's start with the number 0.11 and continue adding zeros in between the two 1 digits in order to make it smaller while keeping it larger than 0.1.
>>> '%.10f' % (3.5 % 0.101)
'0.0660000000'
>>> '%.10f' % (3.5 % 0.1001)
'0.0966000000'
>>> '%.10f' % (3.5 % 0.10001)
'0.0996600000'
>>> '%.10f' % (3.5 % 0.100001)
'0.0999660000'
>>> '%.10f' % (3.5 % 0.1000001)
'0.0999966000'
>>> '%.10f' % (3.5 % 0.10000001)
'0.0999996600'
>>> '%.10f' % (3.5 % 0.100000001)
'0.0999999660'
>>> '%.10f' % (3.5 % 0.1000000001)
'0.0999999966'
>>> '%.10f' % (3.5 % 0.10000000001)
'0.0999999997'
>>> '%.10f' % (3.5 % 0.100000000001)
'0.1000000000'
The last line gives the impression that we have finally reached 0.1, but changing the format string reveals the true nature:
>>> '%.20f' % (3.5 % 0.100000000001)
'0.09999999996600009156'
The default float format of Python simply does not show enough precision, which is why you see 3.5 % 0.1 = 0.1 and 3.5 / 0.1 = 35.0. It really is 3.5 % 0.100000... = 0.0999999... and 3.5 / 0.100000... = 34.999999.... In the case of the division you even end up with the exact result, because 34.9999... is ultimately rounded up to 35.0.
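You can make that final rounding step concrete with the fractions module, which exposes the exact rational values of the two floats (a small sketch; the integers shown are what a standard IEEE-754 double produces):
>>> from fractions import Fraction
>>> exact = Fraction(3.5) / Fraction(0.1)   # the mathematically exact quotient of the two floats
>>> exact < 35                              # it falls a hair short of 35 ...
True
>>> 35 - exact
Fraction(7, 3602879701896397)
>>> float(exact)                            # ... but rounding to the nearest double gives exactly 35.0
35.0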
Fun fact: If you use a number that is slightly smaller than 0.1 and perform the same operation, you end up with a number that is slightly larger than 0:
>>> 1.0 - 0.9
0.09999999999999998
>>> 35.0 % (1.0 - 0.9)
7.771561172376096e-15
>>> '%.20f' % (35.0 % (1.0 - 0.9))
'0.00000000000000777156'
Using C++ you can even show that 3.5 divided by the float 0.1 is not 35 but something a little smaller.
#include <iostream>
#include <iomanip>

int main(int argc, char *argv[]) {
    // double/float, rounding errors do not cancel out
    std::cout << "double/float: " << std::setprecision(20) << 3.5 / 0.1f << std::endl;
    // double/double, rounding errors cancel out
    std::cout << "double/double: " << std::setprecision(20) << 3.5 / 0.1 << std::endl;
    return 0;
}
http://ideone.com/fTNVho
In Python, 3.5 / 0.1 gives you the exact result of 35 because the rounding errors cancel each other out. It really is 3.5 / 0.100000... = 34.9999999..., and the run of nines is ultimately long enough that the result rounds to exactly 35. The C++ program shows this nicely, as you can mix double and float and play with the precisions of the floating-point numbers.
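You can reproduce the same experiment in pure Python by forcing 0.1 through single precision with the standard struct module (a rough sketch; it assumes ordinary IEEE-754 floats):
>>> import struct
>>> f32 = struct.unpack('f', struct.pack('f', 0.1))[0]   # 0.1 rounded to single precision, widened back to a double
>>> f32 > 0.1                                            # the single-precision value overshoots even more
True
>>> 3.5 / f32 < 35                                       # so double/float falls clearly short of 35
True
>>> 3.5 / 0.1                                            # double/double: the errors cancel out
35.0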
Solution 3:
It has to do with the inexact nature of floating-point arithmetic. 3.5 % 0.1 gets me 0.099999999999999811, so Python is thinking that 0.1 divides into 3.5 at most 34 times, with 0.099999999999999811 left over. I'm not sure exactly what algorithm is being used to achieve this result, but that's the gist.
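One way to check the "34 times, with a bit left over" reading (not necessarily the algorithm CPython uses internally, just the arithmetic any implementation has to agree with) is to redo the computation with exact rationals:
>>> import math
>>> from fractions import Fraction
>>> q = math.floor(Fraction(3.5) / Fraction(0.1))            # how many whole times the float 0.1 fits into 3.5
>>> q
34
>>> float(Fraction(3.5) - q * Fraction(0.1)) == 3.5 % 0.1    # the exact leftover matches Python's float modulo
True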