Question

I am reading a 10-bit A/D converter on a PIC microcontroller and want to average that value over several thousand calls, with the average ending up as a char again. The returned value is a 16-bit char.

I want to do something like:

float f = 0.0;
char n;

for (int i = 0; i < 5000; i++) {
    n = readADC(); // Returns a char, 0->1023
    f = f + (float)n;
}

f = f / 5000.0;
n = (char)f;
// Send n somewhere as the final char, which is the mean of 5000 calls to the ADC.

I am getting strange values back as if I'm not casting properly. How can I fix that?


Solution

If you have a reading of max 1023 (10 bits) and want to average that 5000 times (~13 bits) then it should be pretty clear that a 16-bit accumulator for the averaging is (far) too small.

If the input is stuck at 1023, the sum after 5000 additions will be 5000 * 1023 = 5115000, which clearly does not fit in a 16-bit variable. Even an unsigned one maxes out at 65535; the required sum is more than 78 times larger.

Use uint32_t for the accumulator sum.

Also consider averaging either 4096 or 8192 values, so the division can be done by shifting right either 12 or 13 bits.

Other tips

The (float)n cast has no effect; when you use + on a float and a char, the integer value of the char is converted to the closest float value anyway. C has implicit conversion between these two types.

For example:

char ch = -5;
float f = ch;  // f is now -5.0 (or the closest representable value to that)

When you write n = (char)f;, f is truncated towards zero, e.g.:

float f = -44.3;
char ch = f;    // ch is now -44

However, if this value is outside the range of char, there is a problem: the behaviour is implementation-defined, which means your compiler may do whatever it likes, but it must document that. To be safe, you should check something like the following (CHAR_MIN and CHAR_MAX come from limits.h):

if ( f < CHAR_MIN || f > CHAR_MAX )
    ch = 0;   // and maybe output a message
else
    ch = f;

Regarding your claim about 16-bit chars: systems with 16-bit chars do exist; however, they are very specialized, which is why some people doubt the claim.

To verify this, examine CHAR_BIT defined in limits.h. Alternatively (if your implementation does not provide CHAR_BIT, CHAR_MIN and CHAR_MAX, which is non-conforming, by the way) do this:

unsigned int u = (unsigned char)-1;

and output the value of u somehow, or perform a test on it. If you get 255 then it is 8-bit; if you get 65535 then it is 16-bit.

Since char is typically only 8 bits, you may well get a bad value if your input data is > 127. Change n to unsigned short and change:

n=(char)f;

to:

n=(unsigned short)f;
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow