Question

I'm converting a uint32_t hex number into a uint32_t BCD number, and will do the same for uint64_t hex to BCD...

I have this for a uint16_t:

uint16_t CvtBcd(uint16_t HexNumber)
{
  return ((HexNumber / 10) << 4) | (HexNumber % 10);
}

Edit:

I'm going to use it as external code in a bigger program.

// Converts a uint32_t hex number into uint32_t BCD number.
extern uint32_t Cvt32Bcd(uint32_t HexNumber)
{
  return ((HexNumber / 10) << 8) | (HexNumber % 10);
}

Solution

In a binary-coded decimal representation you need four bits per decimal digit, so you cannot convert every (unsigned) integer to a BCD representation of the same size; usually you need a larger type for the result. Ignoring that problem, an algorithm to convert an unsigned integer of any size to its BCD representation is:

/* uintN_t stands for any unsigned integer type (uint16_t, uint32_t, ...). */
uintN_t convert_to_BCD(uintN_t n) {
    uintN_t bcd = 0;
    int shift = 0;
    while (n) {
        bcd |= (n % 10) << shift;  /* place the next decimal digit */
        n /= 10;                   /* drop that digit */
        shift += 4;                /* each BCD digit occupies four bits */
    }
    return bcd;
}
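
For example, here is a minimal sketch of the same loop instantiated for a uint32_t input (the name Cvt32Bcd and the uint64_t return type are assumptions, chosen because the ten decimal digits of UINT32_MAX need 40 bits in BCD):

#include <stdint.h>

uint64_t Cvt32Bcd(uint32_t n)   /* assumed name; note the widened return type */
{
    uint64_t bcd = 0;
    int shift = 0;
    while (n) {
        bcd |= (uint64_t)(n % 10) << shift;  /* place the next decimal digit */
        n /= 10;
        shift += 4;
    }
    return bcd;
}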

To avoid overflow, make the return type larger than the argument type, as in the sketch above, but that doesn't work for the largest available input type, of course. In that case you could use an array of unsigned char to hold the digits, or a struct containing two uintN_t values to hold the result.
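
As a sketch of the struct option (the names Bcd128 and Cvt64Bcd are illustrative assumptions, not part of the original answer): the 20 decimal digits of UINT64_MAX occupy 80 bits in BCD, so the result can be split across two 64-bit words:

#include <stdint.h>

typedef struct {
    uint64_t low;   /* BCD digits 0..15 (least significant) */
    uint64_t high;  /* BCD digits 16..19 */
} Bcd128;

Bcd128 Cvt64Bcd(uint64_t n)
{
    Bcd128 bcd = {0, 0};
    int digit = 0;
    while (n) {
        uint64_t d = n % 10;
        if (digit < 16)
            bcd.low |= d << (4 * digit);
        else
            bcd.high |= d << (4 * (digit - 16));
        n /= 10;
        digit++;
    }
    return bcd;
}

Alternatively, the same loop can write each digit into an unsigned char array, which is often easier to print or display digit by digit.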

Licensed under: CC-BY-SA with attribution