Question

For integers, the bit representation can cover every possible integer between min_int and max_int.

For floats, that is not possible, because within any range there are infinitely many real numbers.

I understand this, no problem.

The smallest positive float is min_float, so I assume any float on a PC should theoretically be k * min_float, where k is an integer; basically, min_float is the smallest unit. Am I right?


One comment below https://stackoverflow.com/a/22939681/759076 says

due to the floating point representations - there are more 'small' numbers than 'big' ones according to the standard.

I can't really understand it. Why are there more small numbers than big ones? IEEE 754 actually divides floats into units, so why does that cause this value bias?


Solution

The commenter was correct: there are more representable small numbers than big ones.

For IEEE 754 floating point numbers, you translate the fraction bits into a value in the range 1 ≤ f < 2, then multiply it by 2 raised to the exponent. Wikipedia's explanation is pretty good. I'll discuss 64-bit doubles here because it's easy to stick to one type.

For simplicity, let's say the equation is f × 2^(e-1023), ignoring the specifics of how f is derived from the raw bits, and the sign bit. For the rest of this answer, assume that the sign bit doesn't exist - it doesn't actually matter because we're talking about the magnitude of values, and not the literal "largeness" or "smallness" in relation to positive and negative numbers.

The important part here is that the exponent data is translated as 2^(e-1023), where e is the 11-bit exponent. The possible raw values for e are between 0 and 2047, which translates to an exponent range of 2^-1023 to 2^1024.
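To make the decomposition concrete, here is a minimal Python sketch (the helper name decompose is just for illustration, and special encodings such as zero, subnormals, infinity and NaN are ignored) that pulls the sign, raw exponent and fraction out of a 64-bit double and rebuilds the value as f × 2^(e-1023):

    import struct

    def decompose(x):
        # Reinterpret the double as a raw 64-bit unsigned integer.
        bits = struct.unpack("<Q", struct.pack("<d", x))[0]
        sign = bits >> 63                   # 1 sign bit
        e = (bits >> 52) & 0x7FF            # 11-bit raw exponent, 0..2047
        fraction = bits & ((1 << 52) - 1)   # 52 fraction bits
        f = 1 + fraction / 2**52            # normal numbers: 1 <= f < 2
        return sign, e, f

    sign, e, f = decompose(6.5)
    print(sign, e, f)             # 0 1025 1.625
    print(f * 2**(e - 1023))      # 6.5 -- matches f * 2^(e-1023)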

Now, since all fraction values are within the range 1 ≤ f < 2, it holds that for any chosen value of f there must be 1024 exponent values that give a result ≤ 2, and 1024 exponent values that give a result > 2.

As such, it's relatively trivial to count them (see the sketch after this list):

  • 2^52 × 1024 = 4,611,686,018,427,387,904 possible values ≤ 2.
  • 2^52 × 1024 = 4,611,686,018,427,387,904 possible values > 2.
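The short Python sketch below reproduces that arithmetic. It treats all 2048 raw exponent values as ordinary, exactly as the counts above do (the special encodings 0 and 2047, and the f = 1 boundary case, are glossed over):

    # Every 52-bit fraction pattern pairs with 2048 raw exponents; the result
    # f * 2^(e - 1023) stays at or below 2 when e <= 1023 and lands above 2
    # when e >= 1024.
    fractions = 2**52

    le_two = fractions * len(range(0, 1024))      # e - 1023 <= 0
    gt_two = fractions * len(range(1024, 2048))   # e - 1023 >= 1

    print(le_two)   # 4611686018427387904 == 2**52 * 1024
    print(gt_two)   # 4611686018427387904 == 2**52 * 1024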

EDIT: Actually, it's even better than I thought... for any fixed value of f, there are 2048 possible exponent values. When the raw exponent is 1023, the computed exponent (2^(1023-1023)) becomes 2^0, which is 1. Since f × 1 is just f, that means that the 50% bound is actually at f, not 2. So for any given value of f, the probability of randomly picking an exponent that gives you a result less than or equal to f is 50%.
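As a quick check of that 50% claim, here is a small sketch that uses Python's fractions module for exact arithmetic; the value 13/8 = 1.625 is just an arbitrary choice of f in [1, 2):

    from fractions import Fraction

    f = Fraction(13, 8)               # any f with 1 <= f < 2 works
    results = [f * Fraction(2) ** (e - 1023) for e in range(2048)]

    at_most_f = sum(r <= f for r in results)
    print(at_most_f, len(results))    # 1024 2048 -> exactly 50%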

Note that we're considering the magnitude of the value, not the literal value, so negatives don't exist here. If you want to bring back the sign bit, just double the counts for each, as you'll have equally many negative numbers with these magnitudes.

Now consider that the range of values in that first set is literally just 0 to f, whereas the range in the second set is f to the maximum possible value that your float type can store. As such, representable numbers will always be a lot more dense in the "small" number range.

To make it 100% clear: 50% of all representable floating point numbers are in the range -f to +f, and all other numbers constitute the remaining 50%.
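A closely related statement can be checked directly, because for non-negative doubles the raw bit pattern, read as an integer, is ordered the same way as the value. The pattern of 2.0 (the top of the fraction range) therefore counts how many non-negative patterns encode a value below it, and it comes out to exactly half of all patterns with the sign bit clear. This sketch counts zero and subnormals as "small", and all infinity/NaN patterns sit above 2.0:

    import struct

    def bit_pattern(x):
        # Raw 64-bit pattern of a double, read as an unsigned integer.
        return struct.unpack("<Q", struct.pack("<d", x))[0]

    below_two = bit_pattern(2.0)      # non-negative patterns encoding a value < 2.0
    non_negative = 1 << 63            # all patterns with the sign bit clear

    print(below_two)                  # 4611686018427387904 == 2**62
    print(below_two / non_negative)   # 0.5 -> half of them sit below 2.0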

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow