Question

I am trying to tidy up this function, which converts a 10-bit value to a 6-bit one. I also need to be able to define the bit length of the input for when I use a higher-resolution ADC:

BYTE ioGetADC (void)                
 {
  BYTE r;

  ConvertADC();              // Start Conversion
  while(BusyADC());              // Wait for completion
   {
    r = ( (ReadADC())/16);          // Read result and convert to 0-63 (returns 10bit right hand justified)
   }

  return r;
 }

The solution

Building on Joachim's answer, how about:

#include <stdint.h>

uint32_t dropBits(uint32_t x, uint8_t bitsIn, uint8_t bitsOut)
{
  return x / (1u << (bitsIn - bitsOut));
}

So, for instance, if we call dropBits(1023, 10, 6) to scale the maximum value of a 10-bit integer into 6 bits, it returns 1023 / (1 << 4), which is 1023 / 16, i.e. 63, the maximum for a 6-bit value.
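As a quick host-side sanity check (a minimal sketch, not part of the original answer; it just repeats the function above so it compiles on its own):

#include <stdint.h>
#include <stdio.h>

/* Copy of dropBits from above so this snippet is self-contained */
static uint32_t dropBits(uint32_t x, uint8_t bitsIn, uint8_t bitsOut)
{
  return x / (1u << (bitsIn - bitsOut));
}

int main(void)
{
  printf("%u\n", (unsigned)dropBits(1023, 10, 6));  /* 63: 10-bit full scale into 6 bits */
  printf("%u\n", (unsigned)dropBits(4095, 12, 6));  /* 63: 12-bit full scale into 6 bits */
  return 0;
}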

Of course, we can be tempted to help the compiler out since the denominator is a power of two:

return x >> (bitsIn - bitsOut);

This removes the division operator and does the work with a direct shift instead.

Note that this can only drop bits; it can't scale a value up into more bits.
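Tying this back to the question, here is a minimal sketch of ioGetADC built on dropBits. It assumes the same Microchip ADC routines (ConvertADC, BusyADC, ReadADC) and the BYTE type from the question; the ADC_RES_BITS and OUT_BITS macro names are placeholders I've introduced, not part of any library:

/* Hypothetical macro names; adjust ADC_RES_BITS for a higher-resolution ADC */
#define ADC_RES_BITS 10   /* e.g. set to 12 for a 12-bit ADC */
#define OUT_BITS      6

BYTE ioGetADC(void)
{
  ConvertADC();                 /* start conversion */
  while (BusyADC())             /* wait for completion */
    ;
  return (BYTE)dropBits(ReadADC(), ADC_RES_BITS, OUT_BITS);
}

Keeping the scaling in dropBits means only the ADC_RES_BITS value needs to change when the ADC resolution changes.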

Other tips

The divisor you need is two raised to the power of the difference between the number of input bits and the number of output bits.
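For example, dropping from 10 bits to 6 bits gives a divisor of 2^(10 - 6) = 16; in C that could be computed as (variable names are illustrative, matching the answer above):

unsigned divisor = 1u << (bitsIn - bitsOut);   /* 2 to the power of (10 - 6) = 16 */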
