Error Correcting Code


In my Linear Algebra book there is a chapter on error-correcting codes.

There is an example involving redundancy in the form of a check digit:

We have $white=(0,0)$, $red=(1,0)$, $blue=(0,1)$, $yellow=(1,1)$. $\qquad (13.13)$

Since our digits are elements of $\mathbb{F}_{2}$, this becomes $white=(0,0,0)$, $red=(1,0,1)$, $blue=(0,1,1)$, $yellow=(1,1,0)$.
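This check-digit step can be sketched in Python (a sketch of my own, not from the book; the function name `add_check_digit` is mine): append the sum of the two bits mod 2, so each pair $(a,b)$ becomes $(a,b,a+b)$.

```python
# Append a check digit: (a, b) -> (a, b, a+b), with arithmetic mod 2.
def add_check_digit(w):
    a, b = w
    return (a, b, (a + b) % 2)

colours = {"white": (0, 0), "red": (1, 0), "blue": (0, 1), "yellow": (1, 1)}
encoded = {name: add_check_digit(w) for name, w in colours.items()}
# white -> (0,0,0), red -> (1,0,1), blue -> (0,1,1), yellow -> (1,1,0)
```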

I understood what the book has done here. Then they add greater redundancy in the following way (which I do not understand): if $w$ is one of the four pairs of $(13.13)$, follow $w$ with a check digit and then with $w$ again. Thus,

$(0,0) \rightarrow (0,0,0,0,0),(1,0) \rightarrow (1,0,1,1,0)$

$(0,1) \rightarrow (0,1,1,0,1),(1,1) \rightarrow (1,1,0,1,1)$

Would someone please help me understand what they have done here?


There are two answers below.

Answer 1:

One linear formula they could have applied:

If the colour is $(a,b)$, we make the redundant copy $(a,b,a+b,a,b)$, where the new middle coordinate is the sum of the previous two. (Since we work in $\mathbb{F}_2$, we also have $b + (a+b) = a$ and $(a+b) + a = b$.) So we append a literal checksum to the original data and then repeat the data.
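The linear formula above can be sketched as follows (my own sketch; the name `encode` is an assumption, not the book's):

```python
# Encode (a, b) -> (a, b, a+b, a, b), with arithmetic mod 2.
def encode(w):
    a, b = w
    return (a, b, (a + b) % 2, a, b)

print(encode((1, 0)))  # -> (1, 0, 1, 1, 0)
```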

The original data of length 2 had the property that flipping one bit produced another valid codeword, with no way to detect the error.

Now suppose we flip one bit of the sent word $(1,0) \rightarrow (1,0,1,1,0)$, say the second bit, so we receive $(1,1,1,1,0)$. This has Hamming distance (the number of coordinates that differ) $1$ to the correct codeword, while its distance to $(0,0,0,0,0)$ is $4$, to $(0,1,1,0,1)$ is $3$, and to $(1,1,0,1,1)$ is $2$. The likeliest sent word is the one at the smallest distance, so we correctly decode to $(1,0)$ despite the one bit flip. Two flips, however, could lead to a wrong decoding.
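Nearest-codeword decoding can be sketched like this (a sketch under my own naming; `encode`, `hamming`, and `decode` are assumed helper names, not from the book):

```python
from itertools import product

# Encode (a, b) -> (a, b, a+b, a, b), with arithmetic mod 2.
def encode(w):
    a, b = w
    return (a, b, (a + b) % 2, a, b)

# All four valid codewords, mapped back to their source pairs.
CODEWORDS = {encode(w): w for w in product((0, 1), repeat=2)}

# Hamming distance: count the coordinates where u and v differ.
def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

# Decode by picking the codeword at minimum Hamming distance.
def decode(received):
    nearest = min(CODEWORDS, key=lambda c: hamming(c, received))
    return CODEWORDS[nearest]

print(decode((1, 1, 1, 1, 0)))  # -> (1, 0): one flip is corrected
```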

Answer 2:

They seem to be using parity to construct an error-correcting code. That is, if the message has an odd number of $1$s, set the special bit, called the "parity bit", to $1$; if the number of $1$s is even, set the parity bit to $0$.

For example,

$(0, 0)$ has no $1$. Hence, $(0, 0) \to (0, 0, 0)$ where the third bit is the parity bit.

The general pattern of a message encoding seems to be $$(b_0, b_1, p, b_0, b_1)$$

where $b_0, b_1$ are the information bits and $p$ is the parity bit.

$(0, 0)$ has zero $1$s, an even number, so $p = 0$.

$$(0, 0) \to (0, 0, 0, 0, 0)$$

$(1, 1)$ has two $1$s, an even number, so $p = 0$.

$$(1, 1) \to (1, 1, 0, 1, 1)$$

$(1, 0)$ has one $1$, an odd number, so $p = 1$.

$$(1, 0) \to (1, 0, 1, 1, 0)$$

One way to think of the value of the parity bit is as the XOR of all values in your tuple. For example, $$p((0, 0)) = 0 \oplus 0 = 0$$ $$p((1, 0)) = 1 \oplus 0 = 1$$ $$p((1, 1)) = 1 \oplus 1 = 0$$
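The XOR view of the parity bit can be sketched directly (my own sketch; the name `parity` is an assumption):

```python
from functools import reduce
from operator import xor

# Parity bit = XOR of all bits: 1 iff the number of 1s is odd.
def parity(bits):
    return reduce(xor, bits, 0)

print(parity((0, 0)), parity((1, 0)), parity((1, 1)))  # -> 0 1 0
```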

Since you mentioned field theory: XOR is the same as addition in $\mathbb{F}_2$.