I am reading Sudan's *Essential Coding Theory* and I am having trouble with Chapter 6, on stochastic channels, specifically with how the transition matrix $M$ can be manipulated.
Consider a Binary Symmetric Channel (BSC). The crossover probability $p$ is the probability that a bit is flipped as it passes through the channel. I want to understand how I can affect the transition matrix.
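For concreteness, and assuming the usual convention (not spelled out above) that rows index channel inputs and columns index channel outputs, the BSC's transition matrix and the effect of a deterministic bit-flip can be written out as:

```latex
% BSC(p) transition matrix: rows are channel inputs, columns are outputs.
M_p = \begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}
% A deterministic bit-flip is the permutation matrix
F = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
% Composing the flip after the channel swaps the columns of M_p:
M_p F = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix} = M_{1-p}
```

So, under this convention, appending a flip block to the channel output turns $\mathrm{BSC}(p)$ into $\mathrm{BSC}(1-p)$.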
For example, assume that I have a channel encoder $E$ and a channel decoder $D$ in the system, such that I get reliable transmission with $p<\frac{1}{2}$. I want to build a new channel, with possibly a new channel encoder $E'$ and a new channel decoder $D'$, such that I get reliable transmission with $p'>\frac{1}{2}$. (By the way, this is exercise 6.1 in the book.)
From what I understand, I need to show 3 things:
- The new $p'$ should be placed as the crossover probability.
- Transmission should still be reliable.
- The algorithm should still be polynomial w.r.t. the input (this is more relevant to exercise 6.1, as I currently do not understand how new channels can be created from old channels).
So far, I thought of the following:
Since $p<\frac{1}{2}$, I know that $1-p>\frac{1}{2}$, so my initial motivation was to perform a bit-flip to try and create a new channel where $p'=1-p$. That said, I came across the following difficulties:
- I do not understand in which component I can put the bit-flip block. Can I only place it in the channel encoder or the channel decoder?
- Where does the channel transition matrix $M$ take its input and output from? Is it exactly what goes into and out of the channel, or does it stretch to before the channel encoder and after the channel decoder?
- Does reliability mean that, after the channel decoder, I can restore with high probability the original message sent before channel encoding? How can I prove that this property is maintained when composing channels? I assume I do not need to recompute the probability from scratch; is there an argument that lets me build on the already existing reliability?
- Conceptually, what does channel composition look like? Does it look like a layered flow graph, where the layers correspond to the state of the message after each stage (channel encoder -> channel -> channel decoder) and the edges carry the transition probabilities?
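The bit-flip idea can be sanity-checked by simulation. The sketch below (my own illustration, not from the book) places a deterministic flip block in front of the decoder: running a bad channel with $p'>\frac{1}{2}$ and then flipping every received bit makes the composite channel behave like a BSC with crossover $1-p'<\frac{1}{2}$, which the existing decoder $D$ can handle.

```python
import random

def bsc(bits, p, rng):
    """Pass bits through a binary symmetric channel with crossover probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def flip(bits):
    """Deterministic bit-flip block: XOR every bit with 1."""
    return [b ^ 1 for b in bits]

rng = random.Random(0)
n = 100_000
sent = [rng.randint(0, 1) for _ in range(n)]

p_prime = 0.9  # a "bad" channel with p' > 1/2
received = bsc(sent, p_prime, rng)

# Without the flip: empirical crossover is about p'.
raw_err = sum(s != r for s, r in zip(sent, received)) / n

# With the flip block prepended to the decoder, the composite
# channel (BSC followed by flip) behaves like BSC(1 - p').
flipped = flip(received)
flip_err = sum(s != f for s, f in zip(sent, flipped)) / n

print(raw_err, flip_err)  # roughly 0.9 and roughly 0.1
```

Note that the flip block is deterministic and runs in linear time, so prepending it to $D$ keeps the overall scheme polynomial.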
If I want to create new transmission schemes from an existing one, I need to be able to give the units, or blocks, of the scheme their own properties. That lets me take them, plant them in a new scheme, and still make sense of what is going on.
These self-contained properties, independent of any specific scheme and inherent to the specific box, help create a conceptual framework in which compositions can be evaluated for reliable transmission.
So, to the answers:
Here is one illustration of a generalized scheme. It is missing individual properties such as the probabilities mentioned earlier (error in decoding, error in encoding), but these can be added.
Here is one illustration of a component in a scheme: Here I posted a BSC channel.
Other boxes are similar with respect to routing: there is an input, then circuitry representing an operation on the input, and then an output. Many types of boxes induce change in a signal (for example, permutation blocks or substitution blocks). Both encoders and decoders change the signal, but the change plays a different role when you consider the entire transmission scheme. So it is important to keep context in mind when considering boxes or blocks: are they considered individually, or as members of an entire transmission scheme?
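The "boxes with routing" view can be made concrete by treating each block as a function on the signal and a scheme as a composition of blocks. The blocks below are hypothetical deterministic examples chosen only to illustrate the routing idea; they are not from the book.

```python
from functools import reduce

def compose(*blocks):
    """Left-to-right composition: the signal flows through each block in order."""
    return lambda signal: reduce(lambda s, block: block(s), blocks, signal)

# Hypothetical blocks, just to show input -> operation -> output routing:
flip_block = lambda bits: [b ^ 1 for b in bits]    # a substitution block
reverse_block = lambda bits: list(reversed(bits))  # a fixed permutation block

scheme = compose(flip_block, reverse_block, flip_block)
print(scheme([1, 0, 0]))  # -> [0, 0, 1]
```

A stochastic box (like a BSC) fits the same interface; it is just a function whose output depends on a random source as well as the input, which is exactly why the transition matrix of a composed scheme is a product of per-block transition matrices.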
Here's another illustration, which attempts to connect the two contexts (individual and collective role on a signal):
But it does not have a representation for all the components that you get in the generalized scheme: the source and channel encoders and decoders are missing. You cannot always show everything on a single map; that is why you have abstraction. But that is a different discussion.
Recommended material for further investigation:
Books:
Book 1: Gallager, *Information Theory and Reliable Communication*
https://www.amazon.com/Information-Theory-Reliable-Robert-Gallager/dp/0471290483
Book 2: Cover & Thomas, *Elements of Information Theory*
https://www.amazon.com/Elements-Information-Theory-Telecommunications-Processing/dp/0471241954
Book 3: Claude Shannon, *A Mathematical Theory of Communication*
https://www.amazon.com/Mathematical-Theory-Communication-Claude-Shannon/dp/0252725484
It is always good to be able to consult the work of a founding father of the field, especially for conceptualization and motivation.
Online lectures (from YouTube):
Playlist 1: Thomas Cover (the author of Book 2)
https://www.youtube.com/playlist?list=PLzfesPA4ERSWlcKhDUiApOCt2o9WdlRYb
Playlist 2: From Stanford
https://www.youtube.com/playlist?list=PLv_7iO_xlL0Kz2nU05COpINjU8C0UPICA