Python 3 Float Decimal Points/Precision
I am reading a text file of floating point numbers, each with either 1 or 2 decimal places. I use float()
to convert each line into a float, and a ValueError
is raised if that fails. I store all the floats in a list. When printing them out, I'd like to print each one as a floating point number with 2 decimal places.
Assume I have a text file with the numbers -3.65, 9.17 and 1. I read each one, convert it to a float, and append it to a list. Now in Python 2, calling float(-3.65)
returns -3.65
. In Python 3 however, float(-3.65) returns
-3.6499999999999999, which loses its precision.
I want to print the list of floats, [-3.6499999999999999, 9.1699999999999999, 1.0]
, with only 2 decimal places. Doing something along the lines of '%.1f' % round(n, 1)
would return a string. How can I get back a list of floats rounded to two decimal places, and not strings? So far, I have rounded it using [round(num, 2) for num in list]
, but I would need to set the decimal precision instead of using round()
.
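Roughly, my code looks like this (the file name is just a placeholder, and lines that fail to parse are simply skipped):
numbers = []
with open("numbers.txt") as f:      # placeholder file name
    for line in f:
        try:
            numbers.append(float(line))
        except ValueError:
            continue                # skip lines that are not valid floats
print(numbers)                      # I want each number shown with 2 decimal places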
The comments state the objective is to print to 2 decimal places.
There's a simple answer for Python 3:
>>> num=3.65
>>> "The number is {:.2f}".format(num)
'The number is 3.65'
or equivalently with f-strings (Python 3.6+):
>>> num = 3.65
>>> f"The number is {num:.2f}"
'The number is 3.65'
As always, the float value is an approximation:
>>> "{}".format(num)
'3.65'
>>> "{:.10f}".format(num)
'3.6500000000'
>>> "{:.20f}".format(num)
'3.64999999999999991118'
I think most use cases will want to work with floats and then only print to a specific precision.
For those who want the numbers themselves stored with exactly 2 decimal digits of precision, I suggest the decimal type. There is more reading on floating point precision for those who are interested.
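As a rough sketch of that suggestion (the quantize pattern and rounding mode here are my own illustration, not part of the question):
from decimal import Decimal, ROUND_HALF_UP

# Build from strings so the values start out exact, then fix them to 2 decimal digits.
values = [Decimal(s).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
          for s in ("-3.65", "9.17", "1")]
print(values)   # [Decimal('-3.65'), Decimal('9.17'), Decimal('1.00')]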
The simple way to do this is by using the round built-in:
round(2.6463636263, 2)
would be displayed as 2.65
.
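For example, applied to the list from the question (a quick sketch of my own):
>>> [round(num, 2) for num in [-3.6499999999999999, 9.1699999999999999, 1.0]]
[-3.65, 9.17, 1.0]
Note that the values are still binary floats; the short output here is just their repr.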
In a word, you can't. 3.65 cannot be represented exactly as a float. The number that you're getting is the nearest number to 3.65 that has an exact float representation.
The difference between (older?) Python 2 and 3 is purely due to the default formatting.
I am seeing the following both in Python 2.7.3 and 3.3.0:
In [1]: 3.65
Out[1]: 3.65
In [2]: '%.20f' % 3.65
Out[2]: '3.64999999999999991118'
For an exact decimal datatype, see decimal.Decimal.
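A quick illustration (a sketch in an interactive session; the formatting calls are my own additions):
>>> from decimal import Decimal
>>> Decimal('3.65')                     # built from a string, so it is exactly 3.65
Decimal('3.65')
>>> '{:.20f}'.format(Decimal('3.65'))   # stays exact at any printed precision
'3.65000000000000000000'
>>> '{:.20f}'.format(3.65)              # the float is only the nearest approximation
'3.64999999999999991118'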