Question

Why is such a “weird” register size used? Is there any documentation on why it is not preferable to use 64 or 128 bits for those registers?


Solution

On the Wikipedia page on the IEEE 754-1985 standard there is a pretty good explanation regarding the 80-bit extended format:

"The standard also recommends extended format(s) to be used to perform internal computations at a higher precision than that required for the final result, to minimise round-off errors"

A double-precision floating-point number is represented in 64 bits. You want a few more bits than that so intermediate results keep higher precision, but a full 128-bit type would be overkill when the final result only needs 64 bits.
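The effect of carrying intermediate results in the wider format is easy to see with repeated summation. Below is a minimal sketch in C; it assumes a compiler and platform where `long double` maps to the 80-bit x87 extended format (as with GCC or Clang on x86), which is not guaranteed everywhere (MSVC, for example, makes `long double` the same as `double`).

```c
#include <stdio.h>

/* Naive summation: the same terms accumulated in double vs. long double.
 * Each addition in the double accumulator rounds to a 53-bit significand;
 * the long double accumulator rounds to 64 bits, so the error grows
 * far more slowly. (Assumes long double is the 80-bit x87 format.) */
int main(void)
{
    const int n = 100000000;     /* 1e8 terms */
    const double term = 0.1;     /* 0.1 is not exactly representable in binary */

    double sum_d = 0.0;
    long double sum_ld = 0.0L;

    for (int i = 0; i < n; i++) {
        sum_d  += term;
        sum_ld += term;
    }

    printf("double accumulator:      %.10f\n", sum_d);
    printf("long double accumulator: %.10Lf\n", sum_ld);
    printf("n * term (reference):    %.10f\n", (double)n * term);
    return 0;
}
```

On such a platform the double accumulator drifts visibly away from the reference value after 1e8 additions, while the long double accumulator stays much closer; rounding the wider result back to double at the end is exactly the usage the standard's recommendation has in mind.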

80 bits is a reasonably even number that is larger than 64 bits, and it is also what you get by extending the double format in a natural way: a sign bit, a 15-bit exponent, and a full 64-bit significand (stored explicitly, without the hidden leading bit) add up to exactly 80 bits.

Consider that the data bus at the time those standards were established was 8 or 16 bits wide, not 32 or 64 bits like today. If the standard were written today, 96 bits would be a more reasonable number, or perhaps the data would simply be moved around as 128 bits even if not all of those bits were used in the calculations.
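That kind of padding is in fact visible on current hardware: on x86-64 with GCC or Clang, `long double` still uses the 80-bit extended format but typically occupies 16 bytes in memory for alignment. A small sketch, assuming such a platform (the reported values differ elsewhere):

```c
#include <stdio.h>
#include <float.h>

/* Query the significand widths via <float.h>.
 * On x86 with GCC/Clang this typically prints 24, 53 and 64;
 * the 64 reflects the 80-bit x87 extended format.
 * sizeof(long double) is usually 12 or 16 bytes because the
 * 80-bit value is padded for alignment. */
int main(void)
{
    printf("float significand bits:       %d\n", FLT_MANT_DIG);
    printf("double significand bits:      %d\n", DBL_MANT_DIG);
    printf("long double significand bits: %d\n", LDBL_MANT_DIG);
    printf("sizeof(long double):          %zu bytes\n", sizeof(long double));
    return 0;
}
```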

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow