Question

I need to convert an 8-bit number (0 - 255 or #0 - #FF) to its 12-bit equivalent (0 - 4095 or #0 - #FFF).

I don't want to do just a straight conversion of the same number; I want to represent the same scale, but in 12 bits.

For example:

0xFF in 8 bits should convert to 0xFFF in 12 bits

0x0 in 8 bits should convert to 0x0 in 12 bits

0x7F in 8 bits should convert to 0x7FF in 12 bits

0x24 in 8 bits should convert to 0x249 in 12 bits

Are there any specific algorithms or techniques that I should be using?

I am coding in C.


Solution

Try x << 4 | x >> 4.

This has been updated by the OP; it was changed from x << 4 + x >> 4, which parses as (x << (4 + x)) >> 4 because + binds tighter than the shift operators in C.
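For reference, a minimal C sketch of that bit-replication trick (the function name expand8to12 is just an illustrative choice):

#include <stdint.h>

/* Replicate the top 4 bits of the 8-bit value into the low 4 bits
 * of the 12-bit result: 0x00 -> 0x000, 0xFF -> 0xFFF, 0x7F -> 0x7F7. */
uint16_t expand8to12(uint8_t x)
{
    return (uint16_t)((x << 4) | (x >> 4));
}

Replicating the high nibble into the low nibble keeps both endpoints of the scale exact: 0x00 maps to 0x000 and 0xFF maps to 0xFFF.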

OTHER TIPS

If you can let the intermediate arithmetic go through a larger domain (the product needs about 20 bits), then this may help:

b = a * ((1 << 12) - 1) / ((1 << 8) - 1) 

It is ugly but preserves the scaling almost exactly as requested. Of course you can replace the shift expressions with the constants 4095 and 255 directly.
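A sketch of that scaling in C, using a 32-bit intermediate so the product a * 4095 cannot overflow; scale8to12 is an illustrative name, and the + 127 term is an added assumption to round to nearest rather than truncate:

#include <stdint.h>

/* Proportional rescale from [0, 255] to [0, 4095], rounded to nearest:
 * b = round(a * 4095 / 255). 0x00 -> 0x000, 0xFF -> 0xFFF, 0x24 -> 0x242. */
uint16_t scale8to12(uint8_t a)
{
    return (uint16_t)(((uint32_t)a * 4095u + 127u) / 255u);
}

This is the mathematically exact linear map; note it gives 0x242 for 0x24, slightly different from the question's rough 0x249.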

What about:

x = x ? ((x + 1) << 4) - 1 : 0
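This maps 0 to 0 and any nonzero x to ((x + 1) << 4) - 1, which hits the endpoint and midpoint examples from the question exactly (0xFF -> 0xFFF, 0x7F -> 0x7FF). A quick self-contained check, with illustrative names:

#include <stdint.h>
#include <stdio.h>

/* 0 -> 0; nonzero x -> ((x + 1) << 4) - 1:
 * 0xFF -> 0xFFF, 0x7F -> 0x7FF, 0x24 -> 0x24F. */
static uint16_t stretch8to12(uint8_t x)
{
    return x ? (uint16_t)(((x + 1) << 4) - 1) : 0;
}

int main(void)
{
    uint8_t samples[] = { 0x00, 0x24, 0x7F, 0xFF };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("0x%02X -> 0x%03X\n", samples[i], stretch8to12(samples[i]));
    return 0;
}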

I use the linear equation y = mx + c, assuming the low end of the range is zero. You can scale your data by a factor of m (multiply to increase the range, divide to decrease it). For example: my ADC data was 12-bit, with an integer range of 0 to 4095, and I wanted to shrink this data into the range 0 to 255.

m = (y2 - y1) / (x2 - x1)
m = (4095 - 0) / (255 - 0)
m = 16.06 ≈ 16

So data received in 12 bits is divided by 16 to convert it to 8 bits (and, going the other way, an 8-bit value is multiplied by 16). This conversion is linear in nature.
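Since m is a power of two here, the multiply and divide by 16 reduce to shifts in C; a minimal sketch under that assumption (function names are illustrative):

#include <stdint.h>

/* y = m*x with m = 16 and c = 0: widen 8 bits to 12 bits. */
uint16_t widen8to12(uint8_t x)
{
    return (uint16_t)(x << 4);   /* note: 0xFF -> 0xFF0, not 0xFFF */
}

/* The inverse: shrink 12 bits to 8 bits, as in the ADC example. */
uint8_t shrink12to8(uint16_t y)
{
    return (uint8_t)(y >> 4);    /* 4095 -> 255 */
}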

Hope this is also a good idea.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow