I was trying to normalize a set of numbers from -100 to 0 to the range 10-100 and was having problems, only to notice that even with no variables at all, this does not evaluate the way I would expect:

>>> (20-10) / (100-10)
0

Casting the result to float doesn't work either:

>>> float((20-10) / (100-10))
0.0

If either side of the division is cast to a float, it will work:

>>> (20-10) / float((100-10))
0.1111111111111111

Each side in the first example evaluates as an int, which means the final answer will be cast to an int. Since 0.111 is less than .5, it rounds down to 0. That isn't transparent in my opinion, but I guess that's the way it is.

What is the explanation?


Solution 1:

You're using Python 2.x, where dividing two integers performs floor division instead of returning a floating-point number.

>>> 1 / 2
0

You should make one of them a float:

>>> float(10 - 20) / (100 - 10)
-0.1111111111111111

or use from __future__ import division, which forces / to adopt Python 3.x's behavior of always returning a float:

>>> from __future__ import division
>>> (10 - 20) / (100 - 10)
-0.1111111111111111
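
Conversely, if you ever want the flooring behavior explicitly, the // floor-division operator works the same way in both Python 2.x and 3.x:

>>> (10 - 20) // (100 - 10)
-1

Note that // floors toward negative infinity, which is why -10 // 90 gives -1 rather than 0.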

Solution 2:

You're putting integers in, so Python is giving you an integer back:

>>> 10 / 90
0

If you cast this to a float afterwards, the flooring will already have happened; in other words, an integer 0 will always become a float 0.0.
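
To see this, on Python 2.x:

>>> float(10 / 90)
0.0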

If you use a float on either side of the division, then Python will give you the answer you expect.

>>> 10 / 90.0
0.1111111111111111

So in your case:

>>> float(20-10) / (100-10)
0.1111111111111111
>>> (20-10) / float(100-10)
0.1111111111111111
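
Putting it together for the original goal of mapping -100..0 onto 10..100, a minimal sketch might look like this (normalize and its parameter names are just illustrative; the float cast is what keeps the division from flooring on Python 2.x):

>>> def normalize(x, src_min=-100, src_max=0, dst_min=10, dst_max=100):
...     # cast one operand so / performs true division on Python 2.x
...     return dst_min + float(x - src_min) / (src_max - src_min) * (dst_max - dst_min)
...
>>> normalize(-100)
10.0
>>> normalize(-50)
55.0
>>> normalize(0)
100.0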