Question

I am struggling with denormalized numbers.

I know that:

Essentially, denormalized floats make it possible to represent the SMALLEST (in magnitude) numbers that any floating-point value can represent.

I also know that numbers can be represented like that:

[image: floating-point representation]

However, where I am stuck is the actual computation of the value of a de-/normalized number.

Is there a method to do that? Are there any special numbers?

Would appreciate your answer!


Solution

“Subnormal” is the term the IEEE 754 standard uses for what the question calls denormalized numbers.

There are no subnormal numbers greater than 1; subnormal numbers are small (smaller in magnitude than any normal number).

The minimum normal exponent is −1022 (encoded as the bits 00000000001, since the exponent encoding is biased by 1023). Subnormal numbers have a lower exponent encoding, all zero bits: 00000000000. (Although the encoding is 0, the exponent it represents is the same as for encoding 1, namely −1022. The exponent encoding 0 instead indicates that the leading bit of the significand is 0 rather than 1.)
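
To make the encoding concrete, here is a small Python sketch (my own illustration, not part of the original answer) that splits a 64-bit double into its sign, exponent, and significand fields; note that the subnormal 2^−1074 has an exponent field of 0:

import struct

def fields(x):
    # Reinterpret the double as a 64-bit unsigned integer.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = bits >> 63                      # 1 sign bit
    exponent = (bits >> 52) & 0x7FF        # 11 exponent bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)      # 52 significand (fraction) bits
    return sign, exponent, fraction

print(fields(1.0))         # (0, 1023, 0)  -> exponent 1023 - 1023 = 0
print(fields(2.0**-1022))  # (0, 1, 0)     -> smallest normal, exponent -1022
print(fields(2.0**-1074))  # (0, 0, 1)     -> subnormal: exponent field is 0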

The value of a subnormal number is the significand (fraction part) multiplied by 2^−1022, with the sign bit applied (0 for positive, 1 for negative). The significand is formed as a leading 0, then the radix point “.”, then the bits of the significand field. So, if the significand field contains 0101010101010101010101010101010101010101010101010101, then the significand value is (in binary) 0.0101010101010101010101010101010101010101010101010101.
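
As a hedged sketch of that rule (decode_subnormal is a hypothetical helper name, not anything from the standard or the original answer), the value can be computed directly from the 52-bit fraction field:

def decode_subnormal(sign, fraction_field):
    # Subnormal double: the significand is 0.<fraction bits> in binary,
    # i.e. fraction_field / 2**52, and the exponent is fixed at -1022.
    significand = fraction_field / 2.0**52
    value = significand * 2.0**-1022
    return -value if sign else value

# The fraction field 0101...0101 (52 bits) from the example above:
f = int('01' * 26, 2)
print(decode_subnormal(0, f))   # about 7.4e-309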

If the significand field is completely zero, the value is zero, and the number is generally not considered subnormal. The smallest positive subnormal number has a 1 in its lowest bit and zeros in all other bits. Its value is (binary) 0.0000000000000000000000000000000000000000000000000001 • 2^−1022, which is 2^−52 • 2^−1022 = 2^−1074.
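
A quick check (again just an illustrative sketch using Python's struct module) decodes the bit pattern with only the lowest bit set and confirms it equals 2^−1074; halving it underflows to zero:

import struct

# Sign 0, exponent field 0, and only the lowest significand bit set.
smallest = struct.unpack('<d', struct.pack('<Q', 1))[0]
print(smallest)                 # 5e-324 (how Python prints 2**-1074)
print(smallest == 2.0**-1074)   # True
print(smallest / 2)             # 0.0 -- nothing positive is smaller than 2**-1074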

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow