Question

I have been reading up on a custom low-speed communication protocol that calculates a CRC over the full payload. The payload may be split across multiple packets. The custom protocol runs on top of an existing bus protocol, which offers optional per-packet CRCs.

So what may happen is:

Pkt 0: S| Pkt Hdr Seq=0 | Start of Payload | Pkt CRC |E

Pkt 1: S| Pkt Hdr Seq=1 | Payload continued | Pkt CRC |E

Pkt 2: S| Pkt Hdr Seq=2 | End of Payload | Payload CRC | Pkt CRC |E

S - Start of Packet; E - End of Packet; Seq - Sequence Number of the Packet
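
To make the framing concrete, here is a rough sketch of how the sender side might build these packets. The single-byte sequence header, the 4-byte CRC-32 fields, the chunk size, and the omission of the S/E delimiters are all illustrative assumptions, not details of the actual protocols.

```python
import zlib


def crc(data: bytes) -> bytes:
    # CRC-32 as a stand-in for whatever CRCs the bus and payload protocols really use
    return zlib.crc32(data).to_bytes(4, "big")


def build_packets(payload: bytes, chunk_size: int = 8) -> list[bytes]:
    """Split the payload into packets: every packet gets a sequence byte and a
    packet CRC; the last packet additionally carries the CRC of the whole payload."""
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    packets = []
    for seq, chunk in enumerate(chunks):
        body = bytes([seq]) + chunk
        if seq == len(chunks) - 1:
            body += crc(payload)          # payload CRC: end-to-end check
        packets.append(body + crc(body))  # packet CRC: per-packet check
    return packets


for pkt in build_packets(b"a payload split across packets"):
    print(pkt.hex())
```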

Why would a protocol add its own CRC to the payload it passes down when there is already a packet-level CRC? The payload is already protected, and the protocol designers knew about the packet-level CRC option.

The only reasons I can think of are:

  1. The layer passing the payload down to the lower protocol layer does not necessarily know whether the lower layer has a CRC at all.
  2. The layer passing the payload doesn't know if the lower protocol layer's configuration has the CRC enabled.
  3. The layer passing the payload down is using an advanced error checking or correction technique on the payload.
  4. The layer passing the payload down is protecting the payload against potentially bad/flaky lower layers/hardware.

Reasons 1, 2, and 3 do not apply in this situation, so 4 is the only "good" reason I have.


Solution

1, 4, and maybe 3.

In protocol stacks it is often important that the layers stay independent of each other. Each layer provides its basic services and may offer optional extras. For example, in the ISO/OSI 7-layer model you may write an application that communicates through sockets. If you add a checksum to your own application-level protocol, you don't have to rely on the error checking of the TCP or UDP layer below.
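
A minimal sketch of that idea, assuming a CRC-32 as the application-level check (a real protocol may well use a different polynomial or a stronger code): the wrap/unwrap pair below knows nothing about the transport underneath, so it works the same whether the bytes travel over TCP, UDP, or a serial bus.

```python
import zlib


def wrap(message: bytes) -> bytes:
    """Append an application-level CRC-32; the payload stays protected
    no matter which transport carries it."""
    return message + zlib.crc32(message).to_bytes(4, "big")


def unwrap(frame: bytes) -> bytes:
    """Verify and strip the application-level CRC on the receiving side."""
    message, received = frame[:-4], frame[-4:]
    if zlib.crc32(message).to_bytes(4, "big") != received:
        raise ValueError("application-level CRC mismatch")
    return message


assert unwrap(wrap(b"sensor reading: 42")) == b"sensor reading: 42"
```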

So it may be true that in the current situation you know the protocol runs on a particular existing bus protocol. But maybe in the future (say, five years from now) that bus will be replaced with something else: some flavor of I2C, OBD, who knows. Newer protocols will usually provide better error checking, but you don't have to rely on the unknown.

You can observe this in the ISO/OSI layers: several layers have their own error checking. It seems redundant, but it keeps the technologies at each layer exchangeable.

OTHER TIPS

A packet can travel multiple (lower-layer) hops on its way from source to destination. Validating the checksum at the destination ensures that we did not run into any bit errors at any of the intermediate hops. Even when a payload is fragmented into multiple smaller packets, it is the source that does the fragmentation and the destination that reassembles it, so from a checksum point of view it makes sense to treat each fragment as a separate packet. This is essentially how fragmentation and checksums work in IP networks as well: if a packet is larger than the MTU, the sender fragments it into multiple smaller packets, and in IPv4 each fragment carries its own header checksum, which the receiver validates before reassembly, while the transport-layer (TCP/UDP) checksum still protects the full reassembled payload end to end. So I will also go with option 4!
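
For illustration, here is a receiver-side sketch under the same assumptions as the framing in the question (a one-byte sequence number, 4-byte CRC-32 fields, and the payload CRC riding at the end of the last fragment): each fragment is checked on its own, and the reassembled payload is checked again end to end.

```python
import zlib


def crc(data: bytes) -> bytes:
    return zlib.crc32(data).to_bytes(4, "big")


def strip_crc(blob: bytes) -> bytes:
    """Verify a trailing CRC-32 and return the data without it."""
    data, tail = blob[:-4], blob[-4:]
    if crc(data) != tail:
        raise ValueError("CRC mismatch")
    return data


def reassemble(fragments: list[bytes]) -> bytes:
    """Check each fragment's packet CRC, reorder by sequence number, then
    check the end-to-end payload CRC carried by the last fragment."""
    pieces = {}
    for frag in fragments:
        body = strip_crc(frag)          # per-fragment check
        pieces[body[0]] = body[1:]
    joined = b"".join(pieces[seq] for seq in sorted(pieces))
    return strip_crc(joined)            # payload followed by payload CRC


# Tiny demo: a payload split into two fragments by hand
payload = b"temperature=21.5"
chunks = [payload[:8], payload[8:] + crc(payload)]        # payload CRC in last chunk
frags = [bytes([i]) + c for i, c in enumerate(chunks)]
frags = [f + crc(f) for f in frags]                       # per-fragment packet CRC
assert reassemble(frags) == payload
```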

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow