A float is inherently imprecise in pretty much every language, because most decimal fractions (like 0.1) cannot be represented exactly in binary floating point. If you need exact decimal precision, use the Decimal class:
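You can see the problem directly with plain floats; the classic example:

```python
# 0.1 and 0.2 are both stored as the nearest binary fractions,
# and the tiny representation errors show up in the sum:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```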
from decimal import Decimal
num1 = Decimal("0.3")
num2 = Decimal("0.2")
num3 = Decimal("0.1")
print(sum([num1, num2, num3]))
Which prints the very pleasing result of:
0.6
The sum itself is still a Decimal object (Decimal('0.6')), so you can keep working with it, or call float() on it if you need a plain float back.
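To illustrate that last point, here is a quick sketch of continuing to work with the Decimal result; the arithmetic stays exact, and float() converts back at the end:

```python
from decimal import Decimal

total = Decimal("0.3") + Decimal("0.2") + Decimal("0.1")
print(total)         # 0.6  (exact)
print(total / 3)     # 0.2  (division also stays a Decimal)
print(float(total))  # 0.6  (plain float, for APIs that need one)
```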