Shannon limit vs Hamming bound


Under the Binary Symmetric Channel, the Shannon limit can be stated as:

$R \le 1-H(p)$

where $R$ is the rate of a code that can be decoded with high probability, $p$ is the probability of bit-flips on the channel, and $H$ is the binary entropy function. (I believe all of this applies in general, but I'm stating it in terms of binary alphabets for simplicity.) This is suspiciously similar to the Hamming bound:

$R \le 1-H(\delta/2)$

where $\delta$ is the relative distance of the code. And indeed, when $\delta \ge 2p$, the code can obviously correct all errors with high probability, since the channel flips roughly a $p$ fraction of the bits. Naively, the converse also feels like it should hold: that to correct all errors with high probability, we need $p \le \delta / 2$. And indeed, this is claimed in these lecture notes:
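To make the "suspicious similarity" concrete, here is a small sanity check (the value $p = 0.1$ is just an example, not from any source): when $\delta = 2p$, the two bounds give exactly the same rate.

```python
import math

def H(x):
    """Binary entropy function (in bits); H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

p = 0.1          # example BSC crossover probability
delta = 2 * p    # relative distance needed to uniquely correct a p fraction of errors

shannon_limit = 1 - H(p)        # max rate for reliable communication over BSC_p
hamming_bound = 1 - H(delta/2)  # Hamming bound for relative distance delta

print(shannon_limit)  # ≈ 0.531
print(hamming_bound)  # identical, since delta/2 == p
```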

Proposition 2.1. Let $0 \le p < \frac{1}{2}$ and $0 < \varepsilon \le \frac{1}{2} - p$. If an algorithm $A$ can handle a $p + \varepsilon$ fraction of worst case errors, then it can be used for reliable communication over $BSC_p$.

... proof snipped ...

Remark 2.2. The converse of Proposition 2.1 is also true. More precisely, if the decoding error probability is exponentially small for the BSC, then the corresponding code must have constant relative distance. The proof of this claim is left as an exercise.

However, that leads to a contradiction: the Shannon limit is tight, but there are upper bounds stronger than the Hamming bound, such as the Elias-Bassalygo bound and the MRRW bound. These bounds rule out the existence of codes that, by Remark 2.2, would be required for the Shannon limit to be tight.
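The gap between the two bounds can be seen numerically. Below is a sketch using the standard statement of the Elias-Bassalygo bound, $R \le 1 - H(J(\delta))$ with Johnson radius $J(\delta) = \frac{1}{2}(1 - \sqrt{1 - 2\delta})$; the value $\delta = 0.2$ is an arbitrary example.

```python
import math

def H(x):
    """Binary entropy function (in bits); H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def hamming_bound(delta):
    return 1 - H(delta / 2)

def elias_bassalygo_bound(delta):
    # Johnson radius J(delta) = (1 - sqrt(1 - 2*delta)) / 2
    J = (1 - math.sqrt(1 - 2 * delta)) / 2
    return 1 - H(J)

delta = 0.2
print(hamming_bound(delta))          # ≈ 0.531
print(elias_bassalygo_bound(delta))  # ≈ 0.492, strictly smaller
```

For every $0 < \delta < \frac{1}{2}$ the Elias-Bassalygo rate is strictly below the Hamming rate, which is exactly what seems to conflict with Remark 2.2.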

I know I'm missing something important here, but what is it?

This might help. The Shannon limit places no constraints on computational complexity, nor on the type of coding/decoding. And it is proved using random coding ideas, which I am sure you know.

Codes are traditionally used for unique decoding, but it is also possible to do list decoding, i.e., let the decoder output a small list that contains the correct codeword. See the notes below from the course by Madhu Sudan, for example.

http://people.seas.harvard.edu/~madhusudan/courses/Spring2020/scribe/lect05.pdf
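To illustrate the unique-vs-list distinction, here is a brute-force sketch on a toy code of my own making (minimum distance 3, so unique decoding corrects only 1 error, while a slightly larger radius yields a short list):

```python
def hamming_dist(a, b):
    """Number of positions where the two strings differ."""
    return sum(x != y for x, y in zip(a, b))

def list_decode(received, codewords, radius):
    """Return all codewords within Hamming distance `radius` of `received`."""
    return [c for c in codewords if hamming_dist(received, c) <= radius]

# Toy code with minimum distance 3 (hypothetical example, not from the notes)
code = ["000000", "111000", "000111", "111111"]

received = "110000"
print(list_decode(received, code, 1))  # ['111000'] -- unique decoding
print(list_decode(received, code, 2))  # ['000000', '111000'] -- a small list
```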

There it is explained how the Elias-Bassalygo bound applies if one allows "small list decoding".