Question

I'm trying to add CRC16 error detection to a Motorola HCS08 microcontroller application. My checksums don't match, though. One online CRC calculator provides both the result I see in my PC program and the result I see on the micro.

It calls the micro's result "XModem" and the PC's result "Kermit."

What is the difference between the way those two ancient protocols specify the use of CRC16?

Solution

you can implement 16 bit IBM, CCITT, XModem, Kermit, and CCITT 1D0F using the same basic code base. see http://www.acooke.org/cute/16bitCRCAl0.html which uses code from http://www.barrgroup.com/Embedded-Systems/How-To/CRC-Calculation-C-Code

the following table shows how they differ:

name    polynomial  initial val  reverse byte?  reverse result?  swap result?
CCITT         1021         ffff             no               no            no
XModem        1021         0000             no               no            no
Kermit        1021         0000            yes              yes           yes
CCITT 1D0F    1021         1d0f             no               no            no
IBM           8005         0000            yes              yes            no

where 'reverse byte' means that each byte is bit-reversed before processing; 'reverse result' means that the 16 bit result is bit-reversed after processing; 'swap result' means that the two bytes in the result are swapped after processing.

all the above was validated with test vectors against http://www.lammertbies.nl/comm/info/crc-calculation.html (if that is wrong, we are all lost...).

so, in your particular case, you can convert code for XModem to Kermit by bit-reversing each byte, bit reversing the final result, and then swapping the two bytes in the result.

[i believe, but haven't checked or worked out the details, that reversing each byte is equivalent to reversing the polynomial (plus some extra details), which is why you'll see very different explanations in different places for what is basically the same algorithm.

also, the approach above is not efficient, but is good for testing. if you want efficient the best thing to do is translate the above to lookup-tables.]

edit: what i have called CCITT above is documented in the RevEng catalogue as CCITT-FALSE. for more info, see the update to my blog post at the link above.

OTHER TIPS

My recollection (I used to do modem stuff way back when) is that Kermit processes the bits in each byte of the data using the least significant bit first.

Most software CRC implementations (Xmodem, probably) run through the data bytes most significant bit first.

When looking at the library source (download it from http://www.lammertbies.nl/comm/software/index.html) used for the CRC Calculation page you linked to, you'll see that XModem uses CRC16-CCITT, the polynomial for which is:

x^16 + x^12 + x^5 + 1  /* the '^' character here represents exponentiation, not xor */

The polynomial is represented by the bitmap (note that bit 16 is implied)

0x1021 == 0001 0000 0010 0001  binary

The Kermit implementation uses:

0x8408 == 1000 0100 0000 1000  binary

which is the same bitmap as XModem's, only reversed.

The text file that accompanies the library also mentions the following difference for Kermit:

Only for CRC-Kermit and CRC-SICK: After all input processing, the one's complement of the CRC is calculated and the two bytes of the CRC are swapped.

So it should probably be easy to modify your CRC routine to match the PC result. Note that the source in the CRC library seems to have a pretty liberal license - it might make sense to use it more or less as is (at least the portions that apply for your application).

X-Modem 1K CRC16.

Process for bytewise CRC-16 using input data {0x01, 0x02} and polynomial 0x1021:

  1. Init crc = 0.

  2. Handle first input byte 0x01:

     2.1 'Xor-in' the input byte, left-shifted by 8, into the MSB(!) of crc:

           0000 0000 0000 0000  (crc 0x0000)
         ^ 0000 0001 0000 0000  (input byte 0x01 left-shifted by 8)
           -------------------
           0000 0001 0000 0000  = 0x0100

         The MSB of this result is our current dividend: MSB(0x0100) = 0x01.

     2.2 So 0x01 is the dividend. Get the remainder for the dividend from our
         table: crctable16[0x01] = 0x1021. (This value is familiar from the
         manual computation above.) Remember the current crc value is 0x0000.
         Shift out the MSB of the current crc and xor it with the remainder to
         get the new crc:

           0001 0000 0010 0001  (0x1021)
         ^ 0000 0000 0000 0000  (crc 0x0000 left-shifted by 8 = 0x0000)
           -------------------
           0001 0000 0010 0001  = 0x1021 = intermediate crc

  3. Handle next input byte 0x02. Currently we have intermediate crc = 0x1021
     = 0001 0000 0010 0001.

     3.1 'Xor-in' the input byte, left-shifted by 8, into the MSB(!) of crc:

           0001 0000 0010 0001  (crc 0x1021)
         ^ 0000 0010 0000 0000  (input byte 0x02 left-shifted by 8)
           -------------------
           0001 0010 0010 0001  = 0x1221

         The MSB of this result is our current dividend: MSB(0x1221) = 0x12.

     3.2 So 0x12 is the dividend. Get the remainder for the dividend from our
         table: crctable16[0x12] = 0x3273. Remember the current crc value is
         0x1021. Shift out the MSB of the current crc and xor it with the
         remainder to get the new crc:

           0011 0010 0111 0011  (0x3273)
         ^ 0010 0001 0000 0000  (crc 0x1021 left-shifted by 8 = 0x2100)
           -------------------
           0001 0011 0111 0011  = 0x1373 = final crc

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow