In my Linear Algebra book I have a chapter about error-correcting codes.
There is an example involving redundancy in the form of a check digit:
we have $white=(0,0)$, $red=(1,0)$, $blue=(0,1)$, $yellow=(1,1)$ $\quad (13.13)$
Since our digits are elements of $\mathbb{F}_{2}$, we append a check digit equal to the sum of the two data digits: $white=(0,0,0)$, $red=(1,0,1)$, $blue=(0,1,1)$, $yellow=(1,1,0)$.
I understood what the book has done here. Then they add greater redundancy in the following way (which I do not understand): if $w$ is one of the four pairs of $(13.13)$, follow $w$ with a check digit and then with $w$ again. Thus,
$(0,0) \rightarrow (0,0,0,0,0),(1,0) \rightarrow (1,0,1,1,0)$
$(0,1) \rightarrow (0,1,1,0,1),(1,1) \rightarrow (1,1,0,1,1)$
Would someone please help me understand what they have done here?
One linear formula they could have applied:
if the colour is $(a,b)$, we send $(a,b,a+b,a,b)$: after the pair we append a check digit, the sum of the previous two coordinates, and then repeat the pair. Note that since we work in $\mathbb{F}_2$, the repeated coordinates are themselves sums of earlier ones: $b + (a+b) = a$ and $(a+b) + a = b$. So we add literal checksums to the original data.
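A minimal sketch of this encoding rule in Python, using the fact that addition in $\mathbb{F}_2$ is addition mod 2 (the function name `encode` is my own label, not from the book):

```python
def encode(a: int, b: int) -> tuple:
    """Encode a colour pair (a, b) over F_2 as the 5-bit word
    (a, b, a+b, a, b): data, check digit, data repeated."""
    return (a, b, (a + b) % 2, a, b)

# The four colour pairs and their codewords:
for pair in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(pair, "->", encode(*pair))
```

Running this reproduces exactly the four codewords listed above, e.g. $(1,0) \rightarrow (1,0,1,1,0)$.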
The original data of length 2 had the property that flipping a single bit always produced another valid pair, so there was no way to detect an error.
Now suppose we send $(1,0) \rightarrow (1,0,1,1,0)$ and the second bit flips, so we receive $(1,1,1,1,0)$. The Hamming distance (the number of positions in which two words differ) from the received word to the correct codeword $(1,0,1,1,0)$ is 1, while the distance to $(0,0,0,0,0)$ is 4, to $(0,1,1,0,1)$ is 3, and to $(1,1,0,1,1)$ is 2. The likeliest sent word is the one at the smallest distance, so we correctly decode to $(1,0)$ despite the single bit flip. Two flips, however, could lead us to decode incorrectly.
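This nearest-codeword (minimum Hamming distance) decoding can be sketched as follows; the helper names `hamming` and `decode` are mine, assuming the same $(a,b,a+b,a,b)$ encoding as above:

```python
def encode(a, b):
    """Map a pair (a, b) over F_2 to the codeword (a, b, a+b, a, b)."""
    return (a, b, (a + b) % 2, a, b)

def hamming(u, v):
    """Hamming distance: number of positions where u and v differ."""
    return sum(x != y for x, y in zip(u, v))

# All four pairs and their codewords.
CODEWORDS = {(a, b): encode(a, b) for a in (0, 1) for b in (0, 1)}

def decode(received):
    """Return the pair whose codeword is closest to the received word."""
    return min(CODEWORDS, key=lambda pair: hamming(CODEWORDS[pair], received))

# Second bit of (1,0,1,1,0) flipped in transit:
print(decode((1, 1, 1, 1, 0)))  # -> (1, 0)
```

The distances computed inside `decode` for $(1,1,1,1,0)$ are exactly the 1, 4, 3, 2 worked out above, so the unique nearest codeword recovers the pair $(1,0)$.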