Question

Let's say I have a decimal floating-point number, i.e. a mantissa and an exponent (a negative one, giving the precision), both represented as integers. How do I convert this into a binary floating-point number (of the sort found in most programming languages), using only the standard operations? Ideally this should be done with maximum precision. The naive method of simply calculating $m \cdot 10^e$, where $m$ is the mantissa and $e$ the exponent, unfortunately turns out to be very imprecise when $|e|$ is large.
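To make the problem concrete, here is a minimal Python sketch (Python's `float` is IEEE 754 binary64) contrasting the naive product with a conversion that goes through an exact rational value and so rounds only once. The function names `naive` and `precise` are my own, not from any library:

```python
from fractions import Fraction

def naive(m, e):
    # Naive: round 10.0**e to a float first, then multiply.
    # Two rounding steps, and 10.0**e itself is inexact for large |e|.
    return m * 10.0 ** e

def precise(m, e):
    # Represent m * 10**e exactly as a rational number, then perform
    # a single correctly rounded conversion to binary floating point.
    if e >= 0:
        return float(Fraction(m * 10 ** e))
    return float(Fraction(m, 10 ** -e))

# The two can differ by an ulp or more when |e| is large,
# e.g. for m = 123456789 and e = -20.
```

The key point is that the error comes from rounding twice: once when forming $10^e$ in binary, and once more in the multiplication. Doing the arithmetic exactly and rounding a single time at the end gives the correctly rounded result.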

I was also wondering how to do the reverse. I suspect the algorithm can't simply be reversed, due to the different representations.
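On the reverse direction: every finite binary float does have an exact (if long) decimal expansion, since $2^{-k}$ divides $10^{-k}$; the genuinely hard problem is producing the *shortest* decimal that round-trips back to the same float, which is what algorithms like Grisu and Ryū solve. A sketch of the exact direction using Python's standard `decimal` module (the helper name `to_decimal` is mine):

```python
from decimal import Decimal

def to_decimal(f):
    # Exact decimal expansion of a binary float: returns (mantissa, exponent)
    # such that f == mantissa * 10**exponent holds exactly.
    sign, digits, exp = Decimal(f).as_tuple()
    m = int(''.join(map(str, digits)))
    return (-m if sign else m), exp
```

For example, `to_decimal(0.1)` exposes the long exact expansion of the float nearest to 0.1, showing that 0.1 itself is not exactly representable in binary.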

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange