Number of check bits for single-error-correcting binary linear code


This is Exercise 23 of the textbook "Abstract Algebra: Theory and Applications" by Thomas W. Judson, 2016; Page 111. Chapter 8 "Algebraic Coding Theory" mainly deals with binary linear code.

Exercise 23: How many check positions are needed for a single error-correcting code with 20 information positions? With 32 information positions?

My attempt: Suppose that the number of check bits is $r$. A single-error-correcting code must be able to point to any one of the $r + k$ positions being in error, or signal no error, so we need $$2^r - 1 \ge r + 20$$ and $$2^r - 1 \ge r + 32$$ respectively. The smallest such $r$ is $5$ in the first case and $6$ in the second.
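The inequality can be checked mechanically. A small Python sketch (the function name `min_check_bits` is mine, chosen for illustration) that finds the smallest $r$ satisfying $2^r - 1 \ge r + k$:

```python
def min_check_bits(k):
    """Smallest r with 2**r - 1 >= r + k: the r check bits must be able
    to name any of the r + k positions as the error, or signal no error."""
    r = 1
    while 2**r - 1 < r + k:
        r += 1
    return r

print(min_check_bits(20))  # -> 5
print(min_check_bits(32))  # -> 6
```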

However, the (partial) answer given in the textbook (Page 331) is $r = 6$ for the case of 20 information bits:

For 20 information positions, at least 6 check bits are needed to ensure an error-correcting code.


What is wrong with my argument?
And how can one solve this problem (for both the 20- and 32-information-bit cases)?

Best answer:

The answer in the book doesn't seem correct. $5$ parity bits can cover a block of $2^5-1=31$ bits, of which $31-5=26$ are information bits, so it is more than enough to cover $20$ information bits. The Wikipedia article on Hamming codes explains a popular linear code that achieves maximal efficiency in terms of the number of parity bits needed.

Let's explicitly build a Hamming code that covers $20$ information bits.

We have a block of $25$ bits, numbered $b_1 \ldots b_{25}$. $5$ of those bits are parity bits, specifically $b_1, b_2, b_4, b_8, b_{16}$. The rest are information bits.

Expressions for the parity bits are:

$$\begin{array}{llcl}
(1) & b_1 &=& b_3 \oplus b_5 \oplus b_7 \oplus \ldots \oplus b_{25} \\
(2) & b_2 &=& b_3 \oplus b_6 \oplus b_7 \oplus b_{10} \oplus b_{11} \oplus b_{14} \oplus b_{15} \oplus b_{18} \oplus b_{19} \oplus b_{22} \oplus b_{23} \\
(3) & b_4 &=& b_5 \oplus b_6 \oplus b_7 \oplus b_{12} \oplus b_{13} \oplus b_{14} \oplus b_{15} \oplus b_{20} \oplus b_{21} \oplus b_{22} \oplus b_{23} \\
(4) & b_8 &=& b_9 \oplus b_{10} \oplus \ldots \oplus b_{15} \oplus b_{24} \oplus b_{25} \\
(5) & b_{16} &=& b_{17} \oplus b_{18} \oplus \ldots \oplus b_{25}
\end{array}$$

It is easy to see that each bit participates in a unique subset of these expressions. Specifically, bit $b_k$ participates in line $(i)$ iff the $i$th binary digit of $k$, counting from the least significant, is $1$.
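This coverage rule can be verified mechanically. A short Python sketch that regenerates the five equations directly from the binary representations of the positions:

```python
# Regenerate lines (1)-(5): position k is checked by line (i+1)
# iff bit i of k is set; the parity bit itself sits at position 2**i.
for i in range(5):
    p = 1 << i                      # parity positions: 1, 2, 4, 8, 16
    covered = [k for k in range(1, 26) if k & p and k != p]
    print(f"b_{p} = XOR of b_k for k in {covered}")
```

Running this reproduces exactly the right-hand sides of equations $(1)$ through $(5)$ above.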

Now if we have a possibly corrupt block $b'_1 \ldots b'_{25}$ where exactly one $b'_i$ is different from $b_i$, some of the equalities above will not hold. If, for lines $(1)$ through $(5)$, we write down $0$ for each equality that holds and $1$ for each one that doesn't, and read line $(1)$ as the least significant bit, we get exactly the position of the flipped bit, in binary. If all five equalities hold, there is no error.
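To make the whole construction concrete, here is a Python sketch of the $(25, 20)$ code described above (the helper names `encode` and `syndrome` are mine, not from the textbook):

```python
def encode(data_bits):
    """Place 20 data bits into a 25-bit block (positions that are not
    powers of two) and fill in the 5 parity bits b_1, b_2, b_4, b_8, b_16."""
    assert len(data_bits) == 20
    block = [0] * 26                    # 1-indexed; index 0 unused
    data_pos = [k for k in range(1, 26) if k & (k - 1)]  # non-powers of two
    for pos, bit in zip(data_pos, data_bits):
        block[pos] = bit
    for i in range(5):
        p = 1 << i                      # parity position
        block[p] = 0
        for k in range(1, 26):
            if k & p and k != p:        # positions covered by line (i+1)
                block[p] ^= block[k]
    return block[1:]                    # 0-indexed list of 25 bits

def syndrome(block):
    """Return the 1-based position of the flipped bit, or 0 if none:
    syndrome bit i is the parity of all positions with bit i set."""
    s = 0
    for i in range(5):
        p = 1 << i
        parity = 0
        for k in range(1, 26):
            if k & p:
                parity ^= block[k - 1]
        if parity:                      # equality (i+1) fails
            s |= p
    return s

data = [1, 0, 1, 1, 0] * 4
block = encode(data)
print(syndrome(block))                  # -> 0 (no error)
block[12] ^= 1                          # corrupt b_13 (index 12)
print(syndrome(block))                  # -> 13, the flipped position
```

The `syndrome` function is just equations $(1)$ through $(5)$ in disguise: each failed equality contributes its power of two, which assembles the error position in binary.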