If you know the floating-point format, you can work out the algorithm yourself:
- If the input is 0, the result is all 0 bits.
- If the input is negative, set the sign bit to 1 and negate the input (two's complement) to get its magnitude.
- Find the highest set bit. Its index plus the bias (127 for single precision) is the exponent field.
- Shift the value so the highest set bit lands at bit 23, then clear that bit; what remains is the mantissa (the leading 1 is implicit in the format).
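The steps above can be sketched in C (function name is mine; like the assembly below, it truncates rather than rounds when the magnitude needs more than 24 significant bits):

```c
#include <stdint.h>

/* Build the IEEE-754 single-precision bit pattern for a 32-bit int. */
uint32_t int_to_float_bits(int32_t n)
{
    if (n == 0)
        return 0;                       /* all-zero bits encode 0.0f */

    uint32_t bits = 0;
    uint32_t mag = (uint32_t)n;
    if (n < 0) {
        bits = 0x80000000u;             /* set the sign bit */
        mag = 0u - mag;                 /* magnitude; handles INT_MIN too */
    }

    int top = 31;
    while (!(mag & (1u << top)))        /* index of the highest set bit */
        top--;

    uint32_t mantissa;
    if (top > 23)
        mantissa = mag >> (top - 23);   /* truncate low bits (no rounding) */
    else
        mantissa = mag << (23 - top);
    mantissa &= 0x007fffffu;            /* clear the implicit leading 1 */

    uint32_t exponent = (uint32_t)(top + 127);  /* add the bias */
    return bits | (exponent << 23) | mantissa;
}
```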
Since this question has been tagged assembly, here is a sample implementation for x86:
int_to_float:
        xor     eax, eax
        mov     edx, [esp+4]
        test    edx, edx
        jz      .done           ; zero input -> all-zero bits
        jns     .pos
        or      eax, 0x80000000 ; set sign bit
        neg     edx             ; work with the magnitude
.pos:
        bsr     ecx, edx        ; index of the highest set bit
        ; shift the highest set bit into bit #23
        sub     ecx, 23
        ror     edx, cl         ; count is taken mod 32, so this works for cl < 0 too
        and     edx, 0x007fffff ; chop off the highest (implicit) bit
        or      eax, edx        ; mantissa
        add     ecx, 127 + 23   ; undo the -23 and add the bias
        shl     ecx, 23
        or      eax, ecx        ; exponent
.done:
        ret
Note: this returns the float's bit pattern in eax, while the calling convention usually mandates returning a float in st0. I just wanted to avoid FPU code entirely. Also note that the result is truncated toward zero, not rounded, when the input needs more than 24 significant bits.
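Since the routine leaves raw bits in eax rather than a value in st0, a C caller would declare it as returning an integer and reinterpret the bits itself (the prototype below is hypothetical; the helper name is mine):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical prototype for the asm routine above:
   extern uint32_t int_to_float(int32_t n);   // raw bits in eax */

/* Reinterpret a 32-bit pattern as a float without violating
   strict aliasing: memcpy is the well-defined type pun in C. */
static float bits_to_float(uint32_t bits)
{
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```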