Question

Suppose I'm using a 32-bit float to store a bit-string (don't ask). Suppose further I'd like to serialize this float to a file (as a float), and will employ banker's rounding on the decimal representation of the float before serializing. When I read the float back into the program, the system will (naturally) store it in a 32-bit float that is as close as possible to the serialized number.

How precise, in terms of decimal digits, must my serialized float be, after banker's rounding, to ensure that the serialized float is equivalent in binary to the float that is read back in?


Solution

If your question is how many decimal digits you need to ensure that conversion to decimal and back to IEEE 754 single precision reproduces the original value, then it is answered in this answer. That assumes the software doing the formatting and parsing rounds correctly (the language standard might not require it).

In particular, the fifth item in note 1 on page 32 of the IEEE 754-2008 standard supports that answer of 9 significant decimal digits for single precision and 17 for double precision:

Conversions from a supported binary format bf to an external character sequence and back again results in a copy of the original number so long as there are at least Pmin(bf) significant digits specified and the rounding-direction attributes in effect during the two conversions are round to nearest rounding-direction attributes.
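
As a quick sanity check, here is a minimal sketch (in Python, an assumption, since the question names no language) that stores values as IEEE 754 binary32 via the struct module, formats them with 9 significant digits (CPython's correctly rounded formatting breaks decimal ties to even, i.e. banker's rounding), parses the text back, and confirms the bit patterns match:

```python
import struct

def float32_bits(x: float) -> int:
    """Raw 32-bit pattern of x after storing it in an IEEE 754 binary32."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

# A handful of sample values; the round-trip guarantee holds for every finite binary32.
samples = [0.1, 1.0 / 3.0, 3.14159265, 1.0e-38, 6.5e37]

for v in samples:
    stored = struct.unpack("<f", struct.pack("<f", v))[0]  # the binary32 value actually held
    text = format(stored, ".9g")   # serialize with 9 significant decimal digits
    readback = float(text)         # parse the decimal text back in
    assert float32_bits(stored) == float32_bits(readback), (v, text)
    print(f"{text:>15}  ->  same binary32 bit pattern")
```

The samples are only a spot check; an exhaustive sweep over all finite binary32 bit patterns is possible but slow. Dropping to 8 significant digits makes the assertion fail for some values, which is why Pmin for binary32 is 9.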

Licensed under: CC-BY-SA with attribution