What is the internal representation of inf and NaN?
11-07-2019
Question
A friend and I were debating over lunch today how Infs and NaNs are stored.
Take Fortran 90 for example. 4-byte reals can take the value Inf or NaN. How is this stored internally? Presumably, a 4-byte real is represented internally by a 32-bit binary number. Are Infs and NaNs stored as 33-bit binary numbers?
Solution
Specifically from Pesto's link:
The IEEE single-precision floating-point standard representation requires a 32-bit word, whose bits may be numbered from 0 to 31, left to right. The first bit is the sign bit S, the next eight bits are the exponent bits E, and the final 23 bits are the fraction F:

S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF
0 1      8 9                     31
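The solution's description is for Fortran 90, but the bit layout is language-independent. As a sketch in Python, the standard `struct` module can reinterpret a float's four bytes as an integer and split out the three fields (the helper name `fields` is just for illustration):

```python
import struct

def fields(x):
    """Reinterpret x as IEEE single precision and return (S, E, F)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))  # raw 32-bit pattern
    s = bits >> 31              # bit 0: sign
    e = (bits >> 23) & 0xFF     # bits 1-8: biased exponent
    f = bits & 0x7FFFFF         # bits 9-31: fraction
    return s, e, f

print(fields(1.0))   # (0, 127, 0): exponent 127 encodes 2**0
print(fields(-2.0))  # (1, 128, 0)
```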
The value V represented by the word may be determined as follows:
- If E=255 and F is nonzero, then V=NaN ("Not a number")
- If E=255 and F is zero and S is 1, then V=-Infinity
- If E=255 and F is zero and S is 0, then V=Infinity
- If 0<E<255, then V=(-1)**S * 2**(E-127) * (1.F), where "1.F" is the binary number created by prefixing F with an implicit leading 1 and a binary point
- If E=0 and F is nonzero, then V=(-1)**S * 2**(-126) * (0.F). These are "unnormalized" (subnormal) values
- If E=0 and F is zero and S is 1, then V=-0
- If E=0 and F is zero and S is 0, then V=0
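The case analysis above can be sketched as a small Python classifier (the function name `classify` is hypothetical; it takes a raw 32-bit pattern, so no byte reinterpretation is needed):

```python
def classify(bits):
    """Apply the IEEE single-precision rules to a raw 32-bit pattern."""
    s = bits >> 31
    e = (bits >> 23) & 0xFF
    f = bits & 0x7FFFFF
    if e == 255:
        return "NaN" if f else ("-Infinity" if s else "Infinity")
    if e == 0:
        return "subnormal" if f else ("-0" if s else "0")
    return "normal"

print(classify(0x7F800000))  # Infinity: E=255, F=0, S=0
print(classify(0x7FC00000))  # NaN: E=255, F nonzero
print(classify(0x00000001))  # subnormal: smallest positive value
```

Note that any pattern with E=255 and a nonzero F is a NaN, which is why there are many distinct NaN bit patterns but only two infinities.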
Other tips
Most floating-point representations are based on the IEEE 754 standard, which defines fixed bit patterns for Inf and NaN.
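To answer the original question directly: Inf and NaN fit in the same 32 bits as any other single-precision value, with no extra bit needed. A quick Python check (using `math.inf` and `math.nan` packed down to single precision):

```python
import math
import struct

# Pack Infinity and a NaN into exactly four bytes each.
inf_bits = struct.unpack(">I", struct.pack(">f", math.inf))[0]
nan_bits = struct.unpack(">I", struct.pack(">f", math.nan))[0]

print(f"{inf_bits:08X}")  # 7F800000: E=255, F=0, S=0
print(f"{nan_bits:08X}")  # E=255, F nonzero (exact pattern is a quiet NaN)
```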