Question

I know this may be a common question, but I have never found an answer (maybe because I don't know how to search for it correctly, so if somebody can point me to a reference, I will remove this question).

Why do doubles work the way they do, representing the part to the right of the point with negative powers of 2 and the part to the left with non-negative powers of 2? I know this allows very large numbers to be represented, but are there any other advantages? The .NET Framework has the decimal type available, which seems much more logical to use because it matches how we write numbers in human notation.

Really, my question is why doubles were designed the way they were, instead of something like decimal being created first (decimal seems to be far less commonly used).

Solution

Your confusion seems to be unfounded. In any positional notation, digits to the right of the radix point represent negative powers of the base and digits to the left represent non-negative powers of the base; that is as true for base 2 as it is for base 10. A binary floating-point number additionally stores an exponent that controls where the radix point falls within the mantissa.
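
For concreteness, here is a small C# sketch (the program and names are my own, not from the answer) that pulls apart the IEEE 754 bit layout of a double: 1 sign bit, 11 exponent bits and 52 stored mantissa bits, with the exponent deciding where the radix point falls.

    using System;

    class DoubleLayout
    {
        static void Main()
        {
            double value = 6.25; // 1.1001 x 2^2 in binary scientific notation
            long bits = BitConverter.DoubleToInt64Bits(value);

            long sign     = (bits >> 63) & 0x1;
            long exponent = (bits >> 52) & 0x7FF;        // stored with a bias of 1023
            long mantissa = bits & 0xFFFFFFFFFFFFFL;     // the 52 stored fraction bits

            Console.WriteLine($"sign     = {sign}");
            Console.WriteLine($"exponent = {exponent} (unbiased: {exponent - 1023})");
            Console.WriteLine($"mantissa = 0x{mantissa:X}");
        }
    }

For 6.25 this should print an unbiased exponent of 2 and a mantissa of 0x9000000000000, i.e. the fraction bits 1001 of 1.1001 in binary, with the exponent shifting the point two places to the right.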

As for why they exist: binary floating point notation has two convenient properties:

  1. It is relatively fast, because it maps directly onto binary hardware arithmetic.
  2. It can represent both very large and very small numbers with the same relative accuracy.

Those properties make binary floating point a good fit for e.g. physical calculations, where a small error in the last place doesn't matter much, but unusable for monetary applications (where you want decimal, despite it being much slower to compute with).
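
A quick illustration of that last point in C# (a standalone sketch I'm adding, not part of the original answer): 0.1 and 0.2 have no exact base-2 representation, so their double sum is not exactly 0.3, while the base-10 decimal type gets it exactly right.

    using System;

    class MoneyRounding
    {
        static void Main()
        {
            double  d = 0.1 + 0.2;
            decimal m = 0.1m + 0.2m;

            Console.WriteLine(d == 0.3);           // False
            Console.WriteLine(d.ToString("G17"));  // 0.30000000000000004
            Console.WriteLine(m == 0.3m);          // True
            Console.WriteLine(m);                  // 0.3
        }
    }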

OTHER TIPS

The FP format packs the maximum amount of precision into one-word or two-word objects while also adding an exponent, so that scientific calculations[1] involving large or small values can be conducted with equal relative precision. Because the objects fit into machine words, they fit into registers, and they are supported directly in CPU hardware and on GPUs, so they are really, really fast.[2]
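
To make the speed claim concrete, here is a rough micro-benchmark sketch of my own (numbers will vary by machine and runtime, so treat it as illustrative only): double arithmetic compiles down to hardware FPU instructions, while decimal arithmetic is done in software by the runtime.

    using System;
    using System.Diagnostics;

    class SpeedSketch
    {
        static void Main()
        {
            const int N = 10_000_000;

            var sw = Stopwatch.StartNew();
            double d = 0;
            for (int i = 1; i <= N; i++) d += 1.0 / i;  // hardware floating-point division and addition
            sw.Stop();
            Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            decimal m = 0;
            for (int i = 1; i <= N; i++) m += 1.0m / i; // software decimal division and addition
            sw.Stop();
            Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
        }
    }

On typical hardware the decimal loop should come out at least an order of magnitude slower, which is the gap being described here.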

The decimal formats are slower and larger, and they are almost never supported in hardware, but the workloads they serve rarely involve elaborate numeric calculations, so that doesn't matter; we can count beans in software easily enough. The one advantage decimal formats have is that the numbers we write in real life (0.10, 0.11, 0.12, ...) can be represented exactly, and that really helps for accounting. (Strangely, because we use base 10 in real life, almost all of the numbers we write in commerce actually cannot be represented exactly in base 2 if they have a fractional part.)
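
The bean-counting point can be shown with a short sketch (again mine, not the answer's): adding ten cents a thousand times is exact in decimal but drifts in double, because 0.10 has no finite base-2 representation.

    using System;

    class BeanCounting
    {
        static void Main()
        {
            double  dTotal = 0;
            decimal mTotal = 0;

            for (int i = 0; i < 1000; i++)
            {
                dTotal += 0.10;   // each addition rounds to the nearest representable double
                mTotal += 0.10m;  // decimal stores 0.10 exactly
            }

            Console.WriteLine(dTotal.ToString("G17")); // slightly off from 100 (accumulated rounding error)
            Console.WriteLine(mTotal);                 // 100.00, exact
        }
    }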

Either format can be used for the opposite application with enough kludges and careful programming, but there wouldn't be much point in it.

[1] It turns out that even though the precision is limited, no physical constant is known to anywhere near the precision of the double type. So doubles really are exactly what is needed for these kinds of calculations.

[2] Fast beyond belief these days. Taken back in time just a few years, any modern GPU would rank as the world's fastest supercomputer.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow