The parity check matrix determines the minimum distance of a linear code.


Let $C$ be a linear code with parity check matrix $H$. Then $d(C) = d$ if and only if every set of $d − 1$ columns of $H$ is linearly independent and some set of $d$ columns of $H$ is linearly dependent.
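As a sanity check (not part of the original question), the theorem can be verified numerically on a small example. The sketch below uses the binary $[7,4]$ Hamming code, whose minimum distance is $3$, as an assumed illustrative code: it computes $d(C)$ both by brute force over the codewords and as the smallest number of linearly dependent columns of $H$, and the two agree.

```python
# Verify on the [7,4] binary Hamming code (d = 3) that the minimum distance
# equals the smallest size of a linearly dependent set of columns of H.
import itertools
import numpy as np

# Parity check matrix of the [7,4] Hamming code (columns are 1..7 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate the column elsewhere
        rank += 1
    return rank

def min_distance_by_codewords(H):
    """Brute-force minimum weight over all non-zero codewords (null space of H)."""
    n = H.shape[1]
    best = n
    for bits in itertools.product([0, 1], repeat=n):
        v = np.array(bits, dtype=np.uint8)
        if v.any() and not (H @ v % 2).any():
            best = min(best, int(v.sum()))
    return best

def min_distance_by_columns(H):
    """Smallest k such that some k columns of H are linearly dependent over GF(2)."""
    n = H.shape[1]
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            if gf2_rank(H[:, idx]) < k:
                return k
    return n

print(min_distance_by_codewords(H), min_distance_by_columns(H))  # both print 3
```

Here every single column and every pair of distinct columns of $H$ is linearly independent, while columns $1, 2, 3$ sum to zero, so both computations return $3$.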

The proof in my lecture notes works as follows:

Let $v \in C$ and $i_1, ..., i_u$ be the non-zero components of $v$. Then, the columns $i_1, ..., i_u$ of the parity check matrix must be linear independent. When we choose a vector of weight $d$ for $v$, we receive $d$ linear independent columns of $H$. On the other hand, the condition $u \ge d$ must hold if you can always pick $d-1$ linear independent columns.

I really don't see how this proof works. I understand why the columns $i_1, ..., i_u$ of the parity check matrix must be linearly independent when $i_1, ..., i_u$ are the positions of the non-zero components of $v$, but the rest of the proof doesn't tell me anything.

There are two answers below.

---

The rest of the proof states exactly the opposite of the first part. Suppose you have a set of $u<d$ columns of $H$, denoted $i_1, \dots, i_u$, and that they are linearly dependent. Then some $k$ of these columns, with $k\le u<d$, sum to zero (over $\mathbb{F}_2$; in general, some nontrivial linear combination of them is zero). You can assemble a codeword $v\in C$ of weight $k$ by taking those $k$ indices as the non-zero positions of $v$. Now take any codeword $v_1\in C$; then $v_1 + v \in C$, and hence $d(v_1+v,v_1) = k < d$, contradicting $d(C)=d$.
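The construction in this answer can be sketched concretely over $\mathbb{F}_2$. The snippet below (an illustration, again assuming the $[7,4]$ Hamming parity check matrix as the example code) takes a set of columns of $H$ that sum to zero and builds the corresponding low-weight codeword:

```python
import numpy as np

# Parity check matrix of the [7,4] Hamming code (assumed illustrative example).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

# Columns 0, 1, 2 of H sum to zero over GF(2), i.e. they are linearly dependent.
dependent = [0, 1, 2]
assert not (H[:, dependent].sum(axis=1) % 2).any()

# Assemble v with ones exactly at those indices: H v = 0 mod 2, so v is a
# codeword, and its weight equals the size of the dependent set.
v = np.zeros(H.shape[1], dtype=np.uint8)
v[dependent] = 1
print((H @ v) % 2)   # [0 0 0] -> v is in C
print(int(v.sum()))  # weight 3

# Adding v to any codeword v1 yields another codeword at Hamming distance
# weight(v) from v1, which is the contradiction used in the answer.
```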

---

The proof seems wrong. It should be "Then, the columns $i_1,...,i_u$ of the parity check matrix must be linearly dependent." Also, one should set aside the zero codeword. Corrected:

Let $v \in C$, $v\ne 0$, and let $i_1, ..., i_u$ be the positions of the non-zero components of $v$. Then the columns $i_1, ..., i_u$ of the parity check matrix must be linearly dependent. When we choose a vector of weight $d$ for $v$, we obtain $d$ linearly dependent columns of $H$. On the other hand, the condition $u \ge d$ must hold if every set of $d-1$ columns is linearly independent.
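The corrected claim, that the columns at the support of any non-zero codeword are linearly dependent, can be checked exhaustively on a small code. This sketch again assumes the $[7,4]$ Hamming parity check matrix as the example:

```python
import itertools
import numpy as np

# Parity check matrix of the [7,4] Hamming code (assumed illustrative example).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# For every non-zero codeword v (i.e. H v = 0 mod 2), the columns of H at the
# support of v must be linearly dependent: their submatrix has rank < weight(v).
n = H.shape[1]
checked = 0
for bits in itertools.product([0, 1], repeat=n):
    v = np.array(bits, dtype=np.uint8)
    if v.any() and not (H @ v % 2).any():
        support = np.flatnonzero(v)
        assert gf2_rank(H[:, support]) < len(support)
        checked += 1
print(checked, "non-zero codewords checked")  # 15 for the [7,4] code
```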

I agree that it can be confusing. I would rather prove it thus:

First assume $d(C)=d$. Then there is some codeword $c_0$ with weight $d$. Because $c_0 H^t = 0$, the $d$ columns of $H$ corresponding to the non-zero elements of $c_0$ are LD (linearly dependent). Furthermore, there cannot be a set of $m$ LD columns with $0<m<d$: in that case we could write $x H^t = 0$ for some vector $x$ with $m<d$ non-zero elements; but the vectors satisfying that equation are exactly the codewords, and there is no non-null codeword with weight less than $d$.

The other direction goes similarly. If there is some set of $d$ LD columns, but no such set of $0<m<d$ columns, then the non-null solutions of $x H^t = 0$ have minimum weight $d$; but these are exactly the codewords, hence the minimum distance is $d$.