Question

Usually, when designing a convolutional encoder for a transmitter, some sort of termination mechanism is applied to drive the encoder back to its zero state after a message has been transmitted. This is often done by appending a tail sequence to the transmitted message, e.g. a certain number (n) of zeros in the case of a convolutional encoder without feedback. This way it takes n clock cycles to return the encoder to the all-zero state.
On the other hand, when implementing a convolutional encoder in HDL, for example, this return to the zero state could also be achieved by simply resetting all (shift) registers of the encoder. That way the zero state would be reached after only one clock cycle.
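To make the comparison concrete, here is a minimal behavioral sketch (in Python, not HDL) of the two termination options. It assumes the common rate-1/2, constraint-length-3 encoder with generator polynomials (7, 5) in octal; these generators are just an illustrative choice, not something specified in the question.

```python
# Behavioral sketch of the two termination options, using a rate-1/2,
# constraint-length-3 encoder with generators (7, 5) octal as an example.

def encode_bit(state, bit):
    """Shift one input bit into the 2-bit state register and emit two code bits."""
    s1, s2 = state
    out0 = bit ^ s1 ^ s2   # generator 7 (111): input + s1 + s2
    out1 = bit ^ s2        # generator 5 (101): input + s2
    return (bit, s1), (out0, out1)

def encode_with_tail(message):
    """Terminate by appending K-1 = 2 zero tail bits (takes 2 extra clock cycles)."""
    state, code = (0, 0), []
    for bit in message + [0, 0]:
        state, out = encode_bit(state, bit)
        code.extend(out)
    assert state == (0, 0)  # encoder is back in the all-zero state
    return code

def encode_with_reset(message):
    """Terminate by clearing the state register directly (no tail bits transmitted)."""
    state, code = (0, 0), []
    for bit in message:
        state, out = encode_bit(state, bit)
        code.extend(out)
    state = (0, 0)          # models a register reset in HDL, one clock cycle
    return code

if __name__ == "__main__":
    msg = [1, 0, 1, 1]
    print("tail-terminated :", encode_with_tail(msg))   # includes tail code bits
    print("reset-terminated:", encode_with_reset(msg))  # no tail code bits
```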
In the literature I have never seen anyone mention the second method, and I was wondering what the reason for this could be.


Solution

If the state of the machine is determined solely by the shift register contents, then this is plausible. However, in some efficient shift register implementations the registers cannot be reset to zero: the shift register macro does not have a reset pin, so you must flush it with zeros.

So, some of this may be the result of hardware restrictions. I know I've run into this in Xilinx designs with pipelined data.
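As a rough behavioral model of that restriction (the class name and depth below are made up for illustration, not the actual primitive's interface), consider a shift register that exposes only a shift operation and no reset. Clearing it requires as many clock cycles as its depth:

```python
# Rough behavioral model of a reset-less shift register macro: it only offers
# a shift operation, so the only way to clear it is to shift in zeros for its
# full depth rather than pulsing a reset.

class SrlShiftRegister:
    def __init__(self, depth=16):
        self.depth = depth
        self.regs = [0] * depth

    def shift(self, bit):
        """One clock cycle: shift a new bit in, the oldest bit falls out."""
        out = self.regs[-1]
        self.regs = [bit] + self.regs[:-1]
        return out

    def flush(self):
        """Clearing takes 'depth' cycles of zero input, not a single reset pulse."""
        for _ in range(self.depth):
            self.shift(0)

sr = SrlShiftRegister(depth=16)
for b in [1, 0, 1, 1]:
    sr.shift(b)
sr.flush()
assert all(r == 0 for r in sr.regs)
```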

Licensed under: CC-BY-SA with attribution