Question

If double is a 64-bit IEEE-754 type and long double is either an 80- or 128-bit IEEE-754 type, what algorithm does the hardware (or the compiler?) use to perform the conversion:

double d = 3.14159;
long double ld = (long double) d;

Also, it would be amazing if someone could list a source for the algorithm, as I've had no luck finding one thus far.

Solution

For normal numbers like 3.14159, the procedure is as follows (a bit-level C sketch follows the list):

1. Separate the number into sign, biased exponent, and significand.
2. Add the difference in the exponent biases for long double and double (0x3fff - 0x3ff) to the exponent.
3. Reassemble the sign, new exponent, and significand (remembering to make the leading bit explicit in the Intel 80-bit format).
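
A minimal C sketch of those three steps, assuming a normal (finite, non-zero, non-subnormal) input and the Intel 80-bit extended layout as the target; the function name double_to_x87_normal and its two output fields are illustrative only, not a real API:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Split a normal double into the sign/exponent and significand fields of
   the Intel 80-bit extended format. Illustrative only. */
static void double_to_x87_normal(double d, uint16_t *se, uint64_t *sig)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                 /* reinterpret the bits safely */

    uint16_t sign    = (uint16_t)(bits >> 63);
    uint16_t biasexp = (uint16_t)((bits >> 52) & 0x7FF);  /* biased by 0x3ff */
    uint64_t frac    = bits & 0x000FFFFFFFFFFFFFULL;      /* 52 fraction bits */

    /* Step 2: re-bias the exponent (0x3fff - 0x3ff = 0x3c00). */
    uint16_t newexp = (uint16_t)(biasexp + (0x3FFF - 0x3FF));

    /* Step 3: shift the fraction up and make the leading bit explicit. */
    *sig = (1ULL << 63) | (frac << 11);
    *se  = (uint16_t)((sign << 15) | newexp);
}

int main(void)
{
    uint16_t se;
    uint64_t sig;
    double_to_x87_normal(3.14159, &se, &sig);
    printf("sign/exponent = 0x%04X, significand = 0x%016llX\n",
           se, (unsigned long long)sig);
    return 0;
}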

In practice, on common hardware with the Intel 80-bit format, the “conversion” is just a load instruction to the x87 stack (FLD). One rarely needs to muck around with the actual representation details, unless targeting a platform without hardware support.
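
For example, a quick check (a sketch assuming a hosted C implementation) shows that nothing is lost at the language level, whatever the hardware does underneath:

#include <assert.h>
#include <float.h>
#include <stdio.h>

int main(void)
{
    double d = 3.14159;
    long double ld = (long double)d;   /* the conversion from the question */

    /* Every double value is representable as a long double, so the
       widening conversion is exact and round-trips back unchanged. */
    assert((double)ld == d);

    printf("double: %d significand bits, long double: %d significand bits\n",
           DBL_MANT_DIG, LDBL_MANT_DIG);
    return 0;
}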

OTHER TIPS

It's defined in the C Standard; google for N1570 to find a free draft of C11. Since all "double" values can be represented in "long double", the result is a long double with the same value. I don't think you will find a precise description of the algorithm that the hardware uses, but it's quite straightforward if you look at the data formats:

Examine the exponent and mantissa bits to find whether the number is an Infinity, a NaN, a normalized number, a denormalized number, or a zero, then:

- produce a long double Infinity or NaN when needed;
- adjust the exponent of normalized numbers and shift the mantissa bits into the right place, adding the implicit highest mantissa bit;
- convert denormalized numbers to normalized numbers;
- convert zeroes to long double zeroes.
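
A portable way to do the same thing without touching the bit layouts is to classify the value and rebuild it with the standard math functions; the widen function below is a hypothetical sketch of that case analysis, not how compilers actually implement the conversion (they simply emit a load or widening instruction):

#include <math.h>
#include <stdio.h>

/* Hypothetical sketch: rebuild a double's value as a long double by
   classifying it first, mirroring the case analysis above. */
static long double widen(double d)
{
    if (isnan(d))
        return (long double)NAN;                 /* NaN in, NaN out (payload not preserved) */
    if (isinf(d))
        return d > 0 ? HUGE_VALL : -HUGE_VALL;   /* propagate the infinity */
    if (d == 0.0)
        return signbit(d) ? -0.0L : 0.0L;        /* preserve the sign of zero */

    /* Normal and denormalized numbers: frexp returns a normalized fraction m
       with d == m * 2^e, so denormals are renormalized automatically. */
    int e;
    double m = frexp(d, &e);
    return ldexpl((long double)m, e);            /* exact: long double has at least double's precision */
}

int main(void)
{
    printf("%.20Lf\n", widen(3.14159));
    return 0;
}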

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow