In class, I learned a relationship between the weight of the error vector and the weight of the syndrome vector, but I don't know how to verify it.
Let $\mathbf{H} = (\mathbf{A} | \mathbf{I}_{(n−k)×(n−k)})$ be a parity-check matrix of an $[n,k,2t+1]$ binary linear code $\mathbb C$ that is used for data transmission over a $BSC(\epsilon)$, $0 \leq \epsilon < 1/2$.
Assume that $\mathbf y$ was received from the channel and that the syndrome $\mathbf{s}^T=\mathbf{H}\cdot \mathbf{y}^T$ is such that $w_H(\mathbf{s}) \leq t$. It is claimed that the only possible error pattern $\mathbf e$ with $w_H(\mathbf e) \leq t$ is $\mathbf{e} = (\mathbf{0}^k|\mathbf{s})$, where $\mathbf{0}^k$ denotes the all-zero vector of length $k$.
It is easy to show that this choice $\mathbf{e} = (\mathbf{0}^k|\mathbf{s})$ is consistent: $\mathbf{H}\cdot \mathbf{e}^T = \mathbf{A}\cdot(\mathbf{0}^k)^T + \mathbf{I}\cdot \mathbf{s}^T = \mathbf{s}^T$, and $w_H(\mathbf e) = w_H(\mathbf s) \leq t$. But how can I show it is the *only* possible error pattern of weight at most $t$? I suspect it is related to the linear dependence of the columns of $\mathbf H$: $\mathbb C$ has minimum distance $2t+1$ if and only if every subset of $2t$ columns of $\mathbf H$ is linearly independent and some subset of $2t+1$ columns is linearly dependent.
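For what it's worth, the claim can at least be checked numerically on a small example. Below is a sketch using the $[7,4,3]$ binary Hamming code (so $t=1$) with a parity-check matrix in the standard form $\mathbf{H} = (\mathbf{A}\,|\,\mathbf{I}_3)$; the particular $\mathbf{A}$ is just one common choice, not the only one. For every syndrome $\mathbf{s}$ with $w_H(\mathbf{s}) \leq t$, it enumerates all error patterns of weight at most $t$ and confirms that $(\mathbf{0}^k|\mathbf{s})$ is the unique one matching that syndrome:

```python
import itertools
import numpy as np

# Parity-check matrix H = (A | I_3) of the [7,4,3] binary Hamming code,
# so t = 1.  This particular A is an assumed (standard) choice.
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=int)
H = np.hstack([A, np.eye(3, dtype=int)])
n, k, t = 7, 4, 1

def syndrome(e):
    """Syndrome s^T = H e^T over GF(2), returned as a tuple."""
    return tuple(H.dot(e) % 2)

# All error patterns of weight <= t.
low_weight_errors = [e for e in itertools.product([0, 1], repeat=n)
                     if sum(e) <= t]

# For every syndrome s with w_H(s) <= t, the only low-weight error
# pattern with that syndrome should be (0^k | s).
for s in itertools.product([0, 1], repeat=n - k):
    if sum(s) > t:
        continue
    matches = [e for e in low_weight_errors
               if syndrome(np.array(e)) == s]
    assert matches == [tuple([0] * k) + s], (s, matches)

print("claim verified for the [7,4,3] Hamming code")
```

This only exercises one small code, of course, but the same brute-force check works for any $[n,k,2t+1]$ code with $\mathbf{H}$ in standard form, as long as $n$ is small enough to enumerate.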
Any help would be appreciated!