Question

The IEEE 754 floating point value stored for 0.3 is approximately 0.29999999999999998.

Based on this, it seems like Python should print "0.29" when you execute "%0.2f" % (.3). But it actually prints "0.30".

What did Python do behind the scenes to get that string representation?

EDIT: The very last sentence of Floating Point Arithmetic: Issues and Limitations is interesting:

In current versions, Python displays a value based on the shortest decimal fraction that rounds correctly back to the true binary value
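A quick way to see both sides of this, using the standard `decimal` module (the exact digits below are the true binary value of the double nearest to 0.3):

```python
from decimal import Decimal

# repr() shows the shortest decimal string that round-trips
# back to the same binary double
print(repr(0.3))    # -> 0.3

# Decimal(0.3) exposes the exact value the double actually stores
print(Decimal(0.3))
# -> 0.299999999999999988897769753748434595763683319091796875
```

So "0.3" is not the stored value; it is merely the shortest decimal that maps back to it.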


Solution

Rounding. Python rounds the value to the requested precision, and 0.29999999999999998… rounded to two decimal places is 0.30.
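This is easy to check directly: the "%.2f" conversion rounds the stored binary value to the nearest two-place decimal, and printing with more digits reveals the approximation the question mentions.

```python
# Two decimal places: the nearest two-place decimal to the stored
# value 0.2999999999999999888... is 0.30
print("%0.2f" % 0.3)   # -> 0.30

# Seventeen decimal places expose the rounding error in the double
print("%.17f" % 0.3)   # -> 0.29999999999999999
```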

Other tips

Conversion to decimal is usually implemented "to the nearest": if you request a decimal representation with two decimal places, you get the decimal number in that format that is nearest to the value being converted. For the double that displays as 0.29999999999999998, that nearest two-place decimal is 0.30.

If anything, it is the C language that is the odd one out: it made rounding "toward zero" fashionable for float-to-integer conversion, even though there is no particular advantage in doing so, and this influenced many programming languages that came after it, including Python, to convert from float to integer by truncation. Float-to-integer conversion and its truncation semantics should be considered the exception, and round-to-nearest the general case for all other conversions, including to and from decimal.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow