Question

Ok, so I am working with a PIC microcontroller, in C. It's a 16F, so it can't hold integers larger than 32 bits (unsigned int32 is the largest data size available).

From a reader, I receive a 5-byte ID code. To transmit it, I have to encode it to BCD, digit by digit. I can't sprintf it to a string, as it is larger than the largest data size and can't be processed. I can't divide it, because no division is defined for a number that size.

I can't figure out any possible solution. Has anyone dealt with this before?

EDIT:

I receive the number as a series of 5 bytes: "FF-FF-FF-FF-FF". I need to convert it to the decimal string "0123456789012" (13 digits, the length of 256^5 in decimal) to send it through RS232. The second part (taking the ASCII and sending it) I already have working, but I need the string representation of the full number before I can do anything with it.


Solution

Assuming you have 32-bit arithmetic: 2**24 = 16777216, so taking x as the most significant 2 bytes and y as the least significant 3:

  (16777216 * x + y) / 1000 
= (16777000 * x + 216 * x + y) / 1000
= 16777 * x + (216 * x + y) / 1000

The first term can be calculated in 32 bits without overflow (since x < 2**16). The second term can also be calculated without overflow (since x < 2**16 and y < 2**24).

This is basically long division in base 2**24 of a 2-digit value, but with terms pre-calculated knowing that the divisor is 1000. One thousand is chosen because it's the least power of 10 greater than 2**8.
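As a concrete sketch of that step (untested; the function name is mine, and I'm assuming a compiler that provides <stdint.h>, as for instance XC8 does):

    #include <stdint.h>

    /* Quotient of the 40-bit value x * 2^24 + y divided by 1000, where
     * x is the top 2 bytes and y the low 3 bytes of the 5-byte ID.
     * 16777 * x and 216 * x + y both fit in 32 bits, as argued above. */
    uint32_t div1000(uint32_t x, uint32_t y)
    {
        return 16777ul * x + (216ul * x + y) / 1000ul;
    }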

So, first compute the lowest three digits, using the fact that (2**32) % 1000 == 296. This time we'll take x as the highest byte and y as the low 4 bytes:

((2**32) * x + y) % 1000 = ((2**32) * x) % 1000 + y % 1000 (modulo 1000)
                         = (296 * x) % 1000 + y % 1000     (modulo 1000)
((2**32) * x + y) % 1000 = ((296 * x) % 1000 + y % 1000) % 1000

Then divide the original number by 1000 using the formula above. After that you're safely into 32-bit territory and can churn out the remaining digits using the normal loop.
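Putting the two formulas together, here's an untested sketch of the whole conversion (assuming <stdint.h> and that the ID arrives most significant byte first; the names and buffer layout are my own choices):

    #include <stdint.h>

    /* Render a 5-byte ID (id[0] most significant) as 13 ASCII digits. */
    void id_to_decimal(const uint8_t id[5], char out[14])
    {
        /* Split as x * 2^24 + y for the division... */
        uint32_t x = ((uint32_t)id[0] << 8) | id[1];
        uint32_t y = ((uint32_t)id[2] << 16) | ((uint32_t)id[3] << 8) | id[4];

        /* ...and as x1 * 2^32 + y1 for the remainder. */
        uint32_t x1 = id[0];
        uint32_t y1 = ((uint32_t)id[1] << 24) | ((uint32_t)id[2] << 16)
                    | ((uint32_t)id[3] << 8) | id[4];

        /* N % 1000, via (2^32) % 1000 == 296 */
        uint32_t rem = ((296ul * x1) % 1000ul + y1 % 1000ul) % 1000ul;

        /* N / 1000, via the division formula; fits in 32 bits */
        uint32_t q = 16777ul * x + (216ul * x + y) / 1000ul;

        int8_t i;

        /* Low 3 digits come from the remainder... */
        for (i = 12; i >= 10; i--) {
            out[i] = '0' + (char)(rem % 10);
            rem /= 10;
        }
        /* ...and the top 10 digits from the 32-bit quotient. */
        for (i = 9; i >= 0; i--) {
            out[i] = '0' + (char)(q % 10);
            q /= 10;
        }
        out[13] = '\0';
    }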

Btw, I'd check the results if I were you: I haven't tested this and it's possible I've made an error somewhere. It should be easy to compare against the results of BCD conversions done the usual way with a 64-bit integer on a PC.

OTHER TIPS

What I would do is implement addition and multiplication for numbers coded as strings (a sort of BigNum). That way, you can sprintf the most significant byte of your ID to a string "A", multiply it by the string "4294967296" (256^4) giving you string "B", sprintf the 4 least significant bytes of your ID into another string "C", and finally add "B" and "C".

It's not very sexy, especially on a microcontroller, but it works :)
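A minimal untested sketch of that idea (the digit-array representation, the fixed 13-digit width, and the names mul_small, add_big, and id_to_string are my own choices): keep one decimal digit per byte, least significant first, so multiplying by a small scalar and adding are simple carry loops.

    #include <stdint.h>

    #define NDIG 13   /* 256^5 has at most 13 decimal digits */

    /* dst = src * scalar (scalar <= 255), one decimal digit per cell,
       least significant digit first */
    static void mul_small(uint8_t dst[NDIG], const uint8_t src[NDIG],
                          uint16_t scalar)
    {
        uint32_t carry = 0;
        uint8_t i;
        for (i = 0; i < NDIG; i++) {
            uint32_t t = (uint32_t)src[i] * scalar + carry;
            dst[i] = (uint8_t)(t % 10);
            carry = t / 10;
        }
    }

    /* dst += src, digit by digit with carry */
    static void add_big(uint8_t dst[NDIG], const uint8_t src[NDIG])
    {
        uint8_t carry = 0, i;
        for (i = 0; i < NDIG; i++) {
            uint8_t t = dst[i] + src[i] + carry;
            dst[i] = t % 10;
            carry = t / 10;
        }
    }

    /* msb * 2^32 + low32, rendered as 13 ASCII digits */
    void id_to_string(uint8_t msb, uint32_t low32, char out[NDIG + 1])
    {
        /* 4294967296 (256^4), least significant digit first */
        static const uint8_t two32[NDIG] = { 6,9,2,7,6,9,4,9,2,4,0,0,0 };
        uint8_t b[NDIG], c[NDIG], i;

        mul_small(b, two32, msb);          /* B = 256^4 * msb  */
        for (i = 0; i < NDIG; i++) {       /* C = low 4 bytes  */
            c[i] = (uint8_t)(low32 % 10);
            low32 /= 10;
        }
        add_big(b, c);                     /* B = B + C        */
        for (i = 0; i < NDIG; i++)
            out[i] = '0' + b[NDIG - 1 - i];
        out[NDIG] = '\0';
    }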

The PIC16F does not have a hardware multiply or divide unit, so unless you are multiplying or dividing by a power of 2, it is taxing for the processor. Here is a routine that performs BCD conversion of a 32-bit number and doesn't require division or multiplication. You could adapt it to a 5-byte number by doing it in chunks.

void BCD32(int32u numIn)
{
    /* Powers of ten for each of the 10 decimal digits of a 32-bit value. */
    static const int32u powers[10] = {
        1000000000, 100000000, 10000000, 1000000, 100000,
        10000, 1000, 100, 10, 1
    };
    int8u digit;
    int8u i;

    for (i = 0; i < 10; i++)
    {
        /* Count how many times this power of ten fits, by repeated
           subtraction -- no multiply or divide needed. */
        digit = 0;
        while (numIn >= powers[i])
        {
            numIn -= powers[i];
            digit++;
        }
        debug[i] = digit + '0';   /* convert count to ASCII digit */
    }
    debug[10] = CARRIAGE_RETURN;
    debug[11] = NEW_LINE_FEED;
    SendUart(12);
}

You can always "sprintf" it to a string manually yourself: go over the data byte by byte and build the numeric string by appending individual characters.

The core of this problem, I would say, is one of "long division" to convert to decimal. It's not entirely unlike the long division you learned in grade school, though with binary numbers long division is much simpler. But it's still a lot of work.

try:
http://mathforum.org/library/drmath/view/55951.html

You will have to implement your own multi-byte subtraction and shift routines.
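For example, repeated subtraction (as in the BCD32 routine above) only needs a 5-byte compare and a 5-byte subtract. An untested sketch, with my own names, bytes stored most significant first:

    #include <stdint.h>

    /* return 1 if a >= b, comparing 5-byte values MSB first */
    static uint8_t ge40(const uint8_t a[5], const uint8_t b[5])
    {
        uint8_t i;
        for (i = 0; i < 5; i++)
            if (a[i] != b[i])
                return a[i] > b[i];
        return 1;   /* equal */
    }

    /* a -= b, borrow rippling up from the least significant byte */
    static void sub40(uint8_t a[5], const uint8_t b[5])
    {
        uint8_t borrow = 0;
        int8_t i;
        for (i = 4; i >= 0; i--) {
            uint16_t t = (uint16_t)a[i] - b[i] - borrow;
            a[i] = (uint8_t)t;
            borrow = (uint8_t)((t >> 8) & 1);   /* wrap-around means borrow */
        }
    }

With the 13 powers of ten from 10^12 down to 1 stored as 5-byte constants, the digit loop is then the same as in BCD32, just using ge40/sub40 in place of >= and -=.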

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow