So here's an answer to expand on the comment I made earlier. I hope you don't mind that I'm using Python, since I know where to find everything I need in that language; maybe someone else can translate this into a suitable answer in C#.
Suppose that you've got a sequence of 128 bits representing a number in IEEE 754 binary128 format, and that you've read those 128 bits in the form of an unsigned integer x
. For example:
>>> x = 0x4126f07c18386f74e697bd57a865a9d0
(I guess this would be a bit messier in C#, since as far as I can tell it doesn't have a 128-bit integer type; you'd need to either use two 64-bit integers for the high and low words, or use the BigInteger
type.)
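For instance, if you were starting from two 64-bit words (the names hi and lo below are my own stand-ins for however the two words were read), the assembly into a single 128-bit integer is just a shift and an OR; the same operation works with C#'s BigInteger:

```python
# Hypothetical high and low 64-bit words of the same bit pattern; stand-ins
# for whatever two 64-bit integers the raw bytes were read into.
hi = 0x4126F07C18386F74
lo = 0xE697BD57A865A9D0

# Shift the high word up 64 bits and OR in the low word to get the full pattern.
x = (hi << 64) | lo
print(hex(x))  # → 0x4126f07c18386f74e697bd57a865a9d0
```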
We can extract the exponent and significand via bit operations as usual (I'm assuming that you already got this far, but I wanted to include the computation for completeness):
>>> significand_mask = (1 << 112) - 1
>>> exponent_mask = (1 << 127) - (1 << 112)
>>> trailing_significand = x & significand_mask
>>> significand = 1.0 + float(trailing_significand) / (2.0**112)
>>> biased_exponent = (x & exponent_mask) >> 112
>>> exponent = biased_exponent - 16383
Note that while the exponent is exact, we've lost most of the precision of significand
at this point, since a double-precision float keeps only 53 significant bits of the 112-bit trailing significand.
>>> significand
1.9393935334951098
>>> exponent
295
So the value represented is around 1.9393935334951098 * 2**295
, or around 1.234567e+89
. But you can't do the computation directly at this stage because it might overflow a Double
(in this case it doesn't, but if the exponent were bigger you'd have a problem). So here's where the logs come in: let's compute the natural log of the value represented by x
:
>>> from math import log, exp
>>> log_of_value = log(significand) + exponent*log(2)
>>> log_of_value
205.14079357778544
Then we can divide by log(10)
to get the exponent and mantissa for the decimal representation: the quotient of the division gives the decimal exponent, while the remainder is the natural log of the decimal significand, so we apply exp
to it to retrieve the actual significand:
>>> exp10, mantissa10 = divmod(log_of_value, log(10))
>>> exp10
89.0
>>> significand10 = exp(mantissa10)
>>> significand10
1.234566999999967
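One property worth noting (my own observation, not part of the steps above): because Python's divmod floors the quotient, this step also works unchanged for values smaller than 1, where log_of_value is negative:

```python
from math import exp, log

# Natural log of a value less than 1 (roughly 2.5e-7), so log_of_value < 0.
log_of_value = log(2.5e-7)

# Floored division still gives the right (negative) decimal exponent,
# and the remainder stays in [0, log(10)), so exp of it is in [1, 10).
exp10, mantissa10 = divmod(log_of_value, log(10))
print(int(exp10), exp(mantissa10))  # → -7 and approximately 2.5
```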
And formatting the answer nicely:
>>> print("{:.10f}e{:+d}".format(significand10, int(exp10)))
1.2345670000e+89
That's the basic idea: to do this generally you'd also need to handle the sign bit and the special bit patterns for zeros, subnormal numbers, infinities and NaNs. Depending on the application, you may not need all of those.
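A sketch of that case analysis might look like the following (the function name and the string labels are my own choices, and you'd dispatch on the result before running the log-based conversion, which applies only to the "normal" case):

```python
def classify_binary128(x):
    """Classify a 128-bit pattern as (sign, kind).

    sign is 0 or 1; kind is one of "zero", "subnormal", "normal", "inf", "nan".
    """
    sign = x >> 127
    biased_exponent = (x >> 112) & 0x7FFF
    trailing = x & ((1 << 112) - 1)
    if biased_exponent == 0x7FFF:
        # All-ones exponent: infinity if the trailing significand is zero,
        # NaN otherwise.
        return sign, "nan" if trailing else "inf"
    if biased_exponent == 0:
        # All-zeros exponent: zero or subnormal, depending on the significand.
        return sign, "zero" if trailing == 0 else "subnormal"
    return sign, "normal"
```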
There's some precision loss involved: first in the conversion of the integer significand to a double-precision float, and again in taking logs and exponentials. The worst case for precision loss occurs when the exponent is large, since a large exponent magnifies the absolute error involved in the exponent*log(2)
computation, which in turn becomes a larger relative error when taking exp
to get the final significand. But since the (unbiased) exponent doesn't exceed 16384, it's not hard to bound the error. I haven't done the formal computations, but this should be good for around 12 digits of precision across the range of the binary128
format, and precision should be a bit better for numbers with small exponent.
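As a quick sanity check on that estimate for this particular input (my own addition, not part of the algorithm), you can compare the log-based result against an exact reference value computed with Python's decimal module:

```python
from decimal import Decimal, getcontext
from math import exp, log

getcontext().prec = 50  # plenty of precision for an exact reference value

x = 0x4126F07C18386F74E697BD57A865A9D0
trailing = x & ((1 << 112) - 1)
exponent = ((x >> 112) & 0x7FFF) - 16383

# Exact value: (2**112 + trailing) * 2**(exponent - 112), in Decimal arithmetic.
exact = Decimal((1 << 112) + trailing) * Decimal(2) ** (exponent - 112)

# Approximate value via the log-based route described above.
significand = 1.0 + trailing / 2.0**112
log_of_value = log(significand) + exponent * log(2)
exp10, mantissa10 = divmod(log_of_value, log(10))
approx = Decimal(exp(mantissa10)) * Decimal(10) ** int(exp10)

# Relative error; should be around 1e-13 or smaller for this input.
rel_err = abs((approx - exact) / exact)
print(rel_err)
```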