Why do convolutional codes make it easy to factor the uncertainty of a bit being a 0 or a 1 into the decoding?

cs.stackexchange https://cs.stackexchange.com/questions/126233

Question

Here is an excerpt from Andrew S. Tanenbaum, Computer Networks, 5th edition, Chapter 3 (The Data Link Layer), page 208:

My question is about this part.

"Convolutional codes have been popular in practice because it is easy to factor the uncertainty of a bit being a 0 or a 1 into the decoding."

Why is it easy to use convolutional codes to factor the uncertainty of a bit?

Is it because the convolutional code circuit is designed in such a way as to handle uncertainty properly? I couldn't really find the exact answer. I think it has something to do with the fact that a convolutional code is not a block code, but I couldn't understand exactly what makes a convolutional code special for handling uncertainty.


Solution

An important idea with convolutional codes is that the bits coming out of the convolutional coder are correlated with one another, i.e., not independent. Hence, on the receiver side you know that, given what the convolutional coder is, only certain sequences of bits can occur, out of the big set of possibilities. Imagine a big space (it may be easier to picture 2 or 3 dimensions first and then extend to more) containing many points with integer coordinates, e.g., (1, 0, 1, 1, 0), of which only a few are allowed possibilities. So you look for the maximum-likelihood sequence, which you can think of as the allowed point most likely to have been the coder's transmitted output, given what was received.
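To make the "few allowed points in a big space" picture concrete, here is a minimal sketch; the allowed codewords and the received values are invented for illustration. The decoder picks the allowed point closest to what was received, which under Gaussian noise is exactly the maximum-likelihood choice:

```python
# A minimal sketch of maximum-likelihood sequence detection. Out of all
# 2^5 points in the space, only these three (invented) codewords are
# allowed outputs of the coder.
allowed = [
    (1, 0, 1, 1, 0),
    (0, 1, 1, 0, 1),
    (1, 1, 0, 0, 1),
]

# Received soft values: a noisy estimate of each transmitted bit, somewhere
# between 0.0 ("surely a 0") and 1.0 ("surely a 1").
received = (0.9, 0.2, 0.6, 0.8, 0.1)

def squared_distance(codeword, soft):
    # Minimizing squared Euclidean distance to the received values is
    # equivalent to maximizing likelihood under additive Gaussian noise.
    return sum((c - s) ** 2 for c, s in zip(codeword, soft))

best = min(allowed, key=lambda cw: squared_distance(cw, received))
print(best)  # (1, 0, 1, 1, 0): the closest allowed point
```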

Soft-decision decoding helps when you get to the stage of decoding the sequence of bits: now you have the probability of each bit being 1 or 0 (by itself, apart from the sequence), and these probabilities can be fed into the computation of the likelihood of each candidate sequence during maximum-likelihood sequence detection.
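As a sketch of what that computation might look like, assuming the noise affects each bit independently (the usual channel model), the likelihood of a candidate sequence is the product of the per-bit probabilities, typically summed in the log domain:

```python
import math

def sequence_log_likelihood(p_one, candidate):
    # p_one[i]  : receiver's soft decision, the probability that bit i is a 1
    # candidate : one allowed bit sequence out of the convolutional coder
    # Assuming independent per-bit noise, the likelihood of the whole sequence
    # is the product of per-bit probabilities; summing logs avoids underflow
    # for long sequences.
    return sum(math.log(p if bit == 1 else 1.0 - p)
               for p, bit in zip(p_one, candidate))
```

A maximum-likelihood sequence detector then evaluates this for every allowed sequence and keeps the winner.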

A simple example to illustrate: imagine there are two allowed sequences coming out of the convolutional coder, 10 and 01. Now the receiver gets a sequence that it thinks is 11, but the first 1 with high certainty (a very likely 1, let's say) and the second 1 with low certainty (maybe a 1; if forced to decide between 0 and 1, we say 1). This information helps the decoder conclude that the transmitted sequence was probably 10 rather than 01. With hard decisions, by contrast, all you have is 11, and 10 and 01 are equally likely (same Hamming distance).
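Plugging invented numbers into this example makes the difference visible:

```python
import math

# Soft decisions from the receiver: P(bit = 1) for each of the two bits.
# (Invented numbers: the first 1 is very certain, the second barely so.)
p_one = (0.95, 0.55)

for candidate in ((1, 0), (0, 1)):
    ll = sum(math.log(p if b else 1.0 - p) for p, b in zip(p_one, candidate))
    print(candidate, round(ll, 2))
# (1, 0) -0.85   <- clearly the more likely sequence
# (0, 1) -3.59

# Hard decisions would round the soft values to 11, which is at Hamming
# distance 1 from both 10 and 01: a tie the decoder cannot break.
```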

Why is it easy to handle the uncertainty of the bit-by-bit decisions? This is merely a statement that decoding algorithms like the Viterbi algorithm are fundamentally designed to accept the uncertainty values (probabilities) of the bit-by-bit decisions as inputs. You could imagine other algorithms for which this is not true, but it is very natural and easy with maximum-likelihood sequence detection algorithms like the Viterbi algorithm.
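To show where that happens, here is a toy soft-decision Viterbi decoder for a small rate-1/2 convolutional code (constraint length 3, generators 111 and 101 in binary). This is a sketch under simplified assumptions, not a production decoder; the point is that the per-bit uncertainty enters only in the branch metric, which compares the received soft values against the expected coded bits directly, with no hard rounding:

```python
# Toy soft-decision Viterbi decoder for a rate-1/2 convolutional code.

def encode(bits):
    s1 = s0 = 0  # shift register holding the two previous input bits
    out = []
    for u in bits:
        out += [u ^ s1 ^ s0,  # output of generator 111
                u ^ s0]       # output of generator 101
        s1, s0 = u, s1
    return out

def viterbi_soft(received):
    # received: soft values in [0, 1], two per information bit.
    # Path metric: accumulated squared distance between the soft values and
    # the coded bits each path would have produced; smallest = most likely.
    paths = {(0, 0): (0.0, [])}  # state (s1, s0) -> (metric, decoded bits)
    for k in range(0, len(received), 2):
        r1, r2 = received[k], received[k + 1]
        survivors = {}
        for (s1, s0), (metric, bits) in paths.items():
            for u in (0, 1):  # hypothesize the next information bit
                v1, v2 = u ^ s1 ^ s0, u ^ s0
                m = metric + (r1 - v1) ** 2 + (r2 - v2) ** 2
                nxt = (u, s1)
                if nxt not in survivors or m < survivors[nxt][0]:
                    survivors[nxt] = (m, bits + [u])
        paths = survivors
    return min(paths.values())[1]  # survivor with the smallest metric

message = [1, 0, 1, 1]
coded = encode(message)  # -> [1, 1, 1, 0, 0, 0, 0, 1]
noisy = [0.8, 0.9, 0.6, 0.3, 0.2, 0.4, 0.3, 0.7]  # soft, noisy observation
print(viterbi_soft(noisy))  # -> [1, 0, 1, 1]
```

A hard-decision decoder would round `noisy` to bits first and count Hamming distance instead; the soft version keeps the information that, say, 0.9 is a much more confident 1 than 0.6.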

Licensed under: CC-BY-SA with attribution