Question

Here is an example:

>>> "%.2f" % 0.355
'0.35'
>>> "%.2f" % (float('0.00355') *100)
'0.36'

Why do they give different results?


Solution

This isn't a formatting bug; it's just floating-point arithmetic. Look at the values underlying your format calls:

In [18]: float('0.00355')
Out[18]: 0.0035500000000000002

In [19]: float('0.00355')*100
Out[19]: 0.35500000000000004

In [20]: 0.355
Out[20]: 0.35499999999999998

The two expressions create different values.
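A minimal sketch to see this for yourself: converting a float to `Decimal` (with no argument quoting) exposes the full binary value actually stored, without the rounding that `repr` or `%f` applies.

```python
from decimal import Decimal

a = 0.355                   # the literal
b = float('0.00355') * 100  # the computed value

# Decimal(float) shows the exact value of the underlying double
print(Decimal(a))   # slightly below 0.355
print(Decimal(b))   # slightly above 0.355
print(a == b)       # False: they are different doubles
```

Since `a` lands just below 0.355 and `b` just above it, `"%.2f"` rounds them in opposite directions.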

The decimal module has been available since Python 2.4, and you can use it to make this work:

>>> import decimal
>>> "%.2f" % (decimal.Decimal('0.00355')*100)
'0.35'

Constructing a Decimal from a string keeps the value exact: Decimal stores numbers in base 10 with user-controlled precision, so '0.00355' is represented exactly instead of being snapped to the nearest binary float.
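Note that `"%.2f"` still converts the Decimal back to a float before formatting. If you want the rounding itself to stay in exact decimal arithmetic, a sketch using `Decimal.quantize` with an explicit rounding rule:

```python
from decimal import Decimal, ROUND_HALF_UP

value = Decimal('0.00355') * 100  # exactly 0.355, no binary error
# quantize rounds to a fixed number of places with an explicit rule
rounded = value.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(rounded)  # 0.36
```

With `ROUND_HALF_UP` the exact value 0.355 rounds up to 0.36, which is usually the behavior people expect from "round to two places".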

OTHER TIPS

Because, as with all floating point "inaccuracy" questions, not every real number can be represented in a limited number of bits.

Even if we went nuts and used a 65536-bit floating-point format, the number of real numbers between 0 and 1 would still be, ... well, infinite :-)

What's almost certainly happening is that the first one is slightly below 0.355 (say, 0.3549999999999) while the second is slightly above (say, 0.3550000001).

See here for some further reading on the subject.

A good tool to play with to see how floating-point numbers work is Harald Schmidt's excellent online converter. It was so handy that I implemented my own C# version as well, capable of handling both IEEE 754 single and double precision.

Arithmetic with floating point numbers is often inaccurate.
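Because of this, comparing floats for exact equality is usually a mistake. A minimal sketch of the standard remedy, comparing with a tolerance via `math.isclose`:

```python
import math

total = 0.1 + 0.2  # accumulates a tiny binary rounding error

print(total == 0.3)             # False: exact comparison fails
print(math.isclose(total, 0.3)) # True: comparison with a tolerance
```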

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow