Question

I have a single-precision float value, and no information about the distribution of the samples from which this value was generated, so I can't apply a sigmoid or perform some kind of normalization. Also, I know the value will always be non-negative. What is the best way to represent this float as a byte?

I've thought of the following:

Interpret the float as a UInt32 (I expect this to maintain relative ordering between numbers, please correct me if I'm wrong) and then scale it to the range of a byte.

UInt32 uVal = BitConverter.ToUInt32(BitConverter.GetBytes(fVal), 0);
// widen to ulong so the multiplication doesn't overflow UInt32
byte bVal = Convert.ToByte((ulong)uVal * Byte.MaxValue / UInt32.MaxValue);

I'd appreciate your comments and any other suggestions. Thanks!

Solution

You have to assume a distribution. You have no choice. Somehow you have to partition the float values and assign them to byte values.

If the distribution is assumed to be linear on an arithmetic scale, the space runs from roughly 0 to 3.4e38. Each increment in the byte value would then carry a weight of about +1.3e36.

If the distribution is assumed to be linear on a geometric (logarithmic) scale, the space spans a ratio of roughly 2.3e83 between the largest and smallest positive values. Each increment in the byte value would then carry a weight of about x2.1.

You can derive these values by simple arithmetic. The first is maxfloat/256. The second is the 256th root of (maxfloat/minfloat).
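
If it helps, here is a minimal sketch of both schemes in C#. The method names (ToByteLinear, ToByteGeometric) are illustrative, and I am assuming float.Epsilon (the smallest positive denormal) as minfloat and float.MaxValue as maxfloat; substitute whatever endpoints actually make sense for your data.

// Linear on an arithmetic scale: each byte step covers float.MaxValue / 256 (about 1.3e36).
static byte ToByteLinear(float fVal)
{
    double step = (double)float.MaxValue / 256.0;
    return (byte)Math.Min(255.0, fVal / step);
}

// Linear on a geometric scale: each byte step multiplies by the 256th root of
// (float.MaxValue / float.Epsilon), which is about x2.1.
static byte ToByteGeometric(float fVal)
{
    if (fVal <= 0f) return 0;                                   // zero has no logarithm; use the bottom bucket
    double logMin = Math.Log(float.Epsilon);
    double logMax = Math.Log(float.MaxValue);
    double t = (Math.Log(fVal) - logMin) / (logMax - logMin);   // position on a 0..1 log scale
    return (byte)Math.Min(255.0, t * 256.0);
}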

Your proposal to reinterpret and scale the raw bit pattern will produce a lumpy distribution, in which numbers with different exponents can be grouped together while numbers with the same exponent but different mantissas can be separated. I would not recommend it for most purposes.
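
To make the lumpiness concrete, here is a quick check of that scaling (RawBitsToByte is just an illustrative wrapper for it):

static byte RawBitsToByte(float fVal)
{
    UInt32 uVal = BitConverter.ToUInt32(BitConverter.GetBytes(fVal), 0);
    return (byte)((ulong)uVal * Byte.MaxValue / UInt32.MaxValue);
}

// 4.0f and 8.0f have different exponents but land on the same byte,
// while 2.0f and 3.5f share an exponent but land on different bytes.
Console.WriteLine(RawBitsToByte(2.0f));   // 63
Console.WriteLine(RawBitsToByte(3.5f));   // 64
Console.WriteLine(RawBitsToByte(4.0f));   // 64
Console.WriteLine(RawBitsToByte(8.0f));   // 64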

--

A really simple way that might suit some purposes is to use just the 8-bit exponent field (mask 0x7f800000, shifted right 23 bits), ignoring the sign bit and the mantissa. The exponent values 0x00 (zero and denormals) and 0xff (infinity and NaN) would have to be handled specially. See http://en.wikipedia.org/wiki/Single-precision_floating-point_format.
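
A minimal sketch of that idea (ExponentByte is just an illustrative name; how you fold the 0x00 and 0xff cases into your scale is up to you):

static byte ExponentByte(float fVal)
{
    UInt32 bits = BitConverter.ToUInt32(BitConverter.GetBytes(fVal), 0);
    byte exp = (byte)((bits & 0x7F800000u) >> 23);   // isolate the 8 exponent bits
    if (exp == 0x00) { /* zero or denormal: decide what the bottom bucket should mean */ }
    if (exp == 0xFF) { /* infinity or NaN: decide what the top bucket should mean */ }
    return exp;
}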

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow