Convolutional coding is often applied in wireless transmission systems since it allows for economical maximum-likelihood soft-decision decoding, in contrast to older channel codes. While I intuitively see a qualitative advantage in this, I am not able to formulate a quantitative statement about the benefit, and the statements I can find are very vague. For example:
> Since a convolutional code doesn't use blocks, processing instead a continuous bitstream, the value of t applies to a quantity of errors located relatively near to each other. That is, multiple groups of t errors can usually be fixed when they are relatively far apart. (Wikipedia)
Is this all we can get? Is modern coding theory just trial-and-error? If not, it should be possible to formulate a quantitative statement about the following concrete example.
## Exemplary Comparison
Let's consider the following transmission chain. We have a stream of bits with a given bit rate (e.g. $1\,\text{Mb/s}$). We then apply one of the codings given below to transform the stream of bits into a stream of chips (just a placeholder term to avoid confusing coded bits with information bits). These chips are modulated and transmitted over a wireless channel. At the receiver, a demodulator reconstructs the stream of chips. Due to the transmission, and depending on the modulation, some chips are erroneously flipped. Decoding then transforms the stream of chips back into a stream of bits. Depending on the errors in the chip stream and on the coding, we now have some errors in the bits.
As a simple example, and to explain what I want to achieve, let's consider an $n=3$ repetition code. Assume that the probability of a chip error is $p$ and that chip errors are independent and identically distributed. Then after (majority) decoding (also see here) the probability of a bit error is \begin{equation*} {3 \choose 2} p^2 (1-p)^1 + {3 \choose 3} p^3 = 3p^2-2p^3. \end{equation*}
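The closed-form expression can be cross-checked with a short Monte-Carlo simulation. This is just a sketch; the chip error rate, trial count, and seed are arbitrary choices of mine, not part of the question:

```python
import random

def repetition3_ber(p, trials=200_000, seed=1):
    """Monte-Carlo bit error rate of a rate-1/3 repetition code with
    iid chip flips of probability p and majority-vote decoding."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        # Transmit one bit as three identical chips; count flipped chips.
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:            # majority vote decides the wrong bit
            errors += 1
    return errors / trials

p = 0.1
print(repetition3_ber(p))         # simulated BER
print(3 * p**2 - 2 * p**3)        # closed form: 0.028
```

For $p=0.1$ the simulated value should land close to the closed-form $3p^2-2p^3 = 0.028$.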
- Interposed question: Is this correct, and is the assumption of iid chip errors equivalent to saying that we have a memoryless AWGN channel?
In the same scenario let's consider three other codings.
1. A convolutional code with the following generator polynomials¹
\begin{align*} G_0(x) &= 1+x+x^2+x^3\\ G_1(x) &= 1+x^2+x^3 \end{align*}
Afterward, with $a_0$ being the output of the first polynomial and $a_1$ the output of the second, the following mapping is applied, resulting in 8 chips:
| a_0 | a_1 | output |
|-----|-----|----------|
| 0 | 0 | 00110011 |
| 1 | 0 | 11000011 |
| 0 | 1 | 00111100 |
| 1 | 1 | 11001100 |
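A minimal sketch of this encoder, assuming $x^i$ denotes a delay of $i$ bits and a zero initial shift-register state (both assumptions of mine; the real LE Coded PHY encoder may differ in initialization and termination details):

```python
# Chip patterns from the table above, keyed by (a0, a1).
PATTERNS = {
    (0, 0): "00110011",
    (1, 0): "11000011",
    (0, 1): "00111100",
    (1, 1): "11001100",
}

def conv_encode(bits):
    """Encode bits with G0 = 1+x+x^2+x^3 and G1 = 1+x^2+x^3
    (zero initial state), then expand each (a0, a1) pair to 8 chips."""
    state = [0, 0, 0]                              # three delay elements
    chips = []
    for b in bits:
        a0 = b ^ state[0] ^ state[1] ^ state[2]    # taps 1,1,1,1
        a1 = b ^ state[1] ^ state[2]               # taps 1,0,1,1
        chips.append(PATTERNS[(a0, a1)])
        state = [b] + state[:2]                    # shift the register
    return "".join(chips)

print(conv_encode([1, 0, 1, 1]))   # 4 bits -> 32 chips
```

Each input bit produces one $(a_0, a_1)$ pair and hence 8 chips, so the overall rate matches the other codings below.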
2. A mapping of 4 consecutive bits to 32 chips according to the following table²
| input | output |
|--------|----------------------------------|
| 0000 | 11011001110000110101001000101110 |
| 0001 | 11101101100111000011010100100010 |
| 0010 | 00101110110110011100001101010010 |
| 0011 | 00100010111011011001110000110101 |
| 0100 | 01010010001011101101100111000011 |
| 0101 | 00110101001000101110110110011100 |
| 0110 | 11000011010100100010111011011001 |
| 0111 | 10011100001101010010001011101101 |
| 1000 | 10001100100101100000011101111011 |
| 1001 | 10111000110010010110000001110111 |
| 1010 | 01111011100011001001011000000111 |
| 1011 | 01110111101110001100100101100000 |
| 1100 | 00000111011110111000110010010110 |
| 1101 | 01100000011101111011100011001001 |
| 1110 | 10010110000001110111101110001100 |
| 1111 | 11001001011000000111011110111000 |
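One way to quantify this block coding under hard decisions is the minimum pairwise Hamming distance $d_{\min}$ of its 16 codewords: a hard-decision nearest-codeword decoder is guaranteed to correct any pattern of up to $\lfloor (d_{\min}-1)/2 \rfloor$ chip errors per symbol. A sketch (sequences copied from the table above):

```python
from itertools import combinations

# The sixteen 32-chip sequences from the table.
CODEWORDS = [
    "11011001110000110101001000101110",
    "11101101100111000011010100100010",
    "00101110110110011100001101010010",
    "00100010111011011001110000110101",
    "01010010001011101101100111000011",
    "00110101001000101110110110011100",
    "11000011010100100010111011011001",
    "10011100001101010010001011101101",
    "10001100100101100000011101111011",
    "10111000110010010110000001110111",
    "01111011100011001001011000000111",
    "01110111101110001100100101100000",
    "00000111011110111000110010010110",
    "01100000011101111011100011001001",
    "10010110000001110111101110001100",
    "11001001011000000111011110111000",
]

def hamming(a, b):
    """Number of chip positions in which two sequences differ."""
    return sum(x != y for x, y in zip(a, b))

d_min = min(hamming(a, b) for a, b in combinations(CODEWORDS, 2))
print(d_min, (d_min - 1) // 2)   # minimum distance, guaranteed correctable chip errors
```

This gives a per-symbol number directly comparable to the $t=\lfloor (n-1)/2 \rfloor = 3$ correctable chip errors of the 8-fold repetition code below.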
3. A trivial repetition code
| input | output |
|--------|----------|
| 0 | 00000000 |
| 1 | 11111111 |
In the end, for all methods, the chip rate is 8 times the bit rate.
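For coding 3 under hard decisions, the bit error probability has a closed form analogous to the $n=3$ case, except that $n=8$ is even, so a 4-4 vote tie must be broken somehow; breaking it with a fair coin is an assumption of mine:

```python
from math import comb

def repetition8_ber(p):
    """Exact hard-decision BER of the 8-fold repetition code with
    majority voting; 4-4 ties broken by a fair coin flip."""
    q = 1 - p
    ber = sum(comb(8, k) * p**k * q**(8 - k) for k in range(5, 9))
    ber += 0.5 * comb(8, 4) * p**4 * q**4   # tie contributes half its mass
    return ber

print(repetition8_ber(0.1))
```

Comparing this curve against Monte-Carlo simulations of codings 1 and 2 over a range of $p$ would make the hard-decision part of the question below concrete.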
- Still assuming iid chip errors and hard-decision demodulation (each chip is decided to be either 0 or 1 before decoding), does the probability of an error in the final bit stream depend on the coding?
- What changes if we apply a more realistic channel model and/or soft-decision decoding, and how can one quantify the resulting difference between the methods?
- Is the ability to perform soft-decision decoding the only benefit of convolutional coding, even compared to trivial repetition codes?
¹LE Coded PHY with S=8 in Bluetooth 5, ²IEEE 802.15.4 DSSS.
Let's take a simple example. Say you are sending $n$ bits, each repeated $3$ times, so $3n$ chips in total, transmitted at $k$ chips per second. Let's say some errors happen here. The channel corrupts those chips, and for each chip the final error probability is $p$, including the combined modulation, demodulation, and channel errors. They are independent, so shouldn't $p^k$ be the combined probability? The link you mentioned says "$\epsilon$ is the error over the transmission channel", which doesn't cover the coding/decoding and modulation/demodulation errors.
So:

**Answer:** I'd say no. It is more likely that the transmission caused the bit flip than the coding method, which is most likely local and easily error-corrected.

**Answer:** This shouldn't matter at all if you know $p$, which is the final probability at the receiving end.