I have a system $y = Gx$ where $x$ is an $N\times 1$ vector and $G$ is a $K\times N$ coding matrix, both with coefficients in $GF(2)$; the output $y$ is therefore a $K\times 1$ vector with coefficients in $GF(2)$. This is an overdetermined system ($K > N$).
I need to understand options for recovering $x$ given the output $y$, as well as something about error-detection.
I am familiar with solving overdetermined linear systems over the reals via least squares, and am vaguely familiar with Hamming codes --- but this is not a Hamming code, and the coefficients of $G$ are accidental rather than designed.
As an example, suppose I have
$$ G = \begin{bmatrix} 1 & 1 & 0 \cr 1 & 1 & 1 \cr 0 & 1 & 1 \cr 1 & 0 & 0 \cr 0 & 0 & 1 \end{bmatrix}$$
and a sample input $x = \begin{bmatrix}1 & 1 & 0\end{bmatrix}^T$. Then $y = \begin{bmatrix}0& 0& 1 & 1 & 0 \end{bmatrix}^T$.
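For concreteness, here is the encoding step in NumPy (matrix names follow the example above; in $GF(2)$ the product is just the ordinary integer product reduced mod 2):

```python
import numpy as np

# Coding matrix G and sample input x from the example above.
G = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1],
              [1, 0, 0],
              [0, 0, 1]], dtype=int)
x = np.array([1, 1, 0])

# GF(2) matrix-vector product: multiply over the integers, then reduce mod 2.
y = G @ x % 2
print(y)  # [0 0 1 1 0]
```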
Suppose also I receive a corrupted word $w = y + e$, where $e$ is a one-bit error (a single nonzero entry). To go back from $y$ or $w$ to $x$, I would have to find a pseudo-inverse of $G$, so that $\begin{bmatrix}x \cr s\end{bmatrix} = Hy$, with $H$ a $5\times 5$ matrix and $s$ two bits of values that could be used for error detection and maybe correction.
I don't think there are enough coding bits to correct a one-bit error, although I guess I could detect it.
How can I go about finding $H$?
Oh, I think I found it: it's just ordinary Gauss-Jordan elimination, which is exact in $GF(2)$ (there are no conditioning issues, since every nonzero pivot equals 1). So in the example I posted, here is one possible set of steps:
$$ \left[\begin{array}{ccc|ccccc} \boxed{1} & 1 & 0 & 1 & 0 & 0 & 0 & 0\cr 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \cr 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \cr 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \cr 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$
$$ \left[\begin{array}{ccc|ccccc} 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\cr 0 & 0 & \boxed{1} & 1 & 1 & 0 & 0 & 0 \cr 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \cr 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \cr 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$
$$ \left[\begin{array}{ccc|ccccc} 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\cr 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \cr 0 & \boxed{1} & 0 & 1 & 1 & 1 & 0 & 0 \cr 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \cr 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right]$$
$$ \left[\begin{array}{ccc|ccccc} 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0\cr 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \cr 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \cr 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \cr 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right]$$
$$ \left[\begin{array}{ccc|ccccc} 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0\cr 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \cr 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \cr 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right]$$
$$ H = \begin{bmatrix} 0 & 1 & 1 & 0 & 0\cr 1 & 1 & 1 & 0 & 0 \cr 1 & 1 & 0 & 0 & 0 \cr 0 & 1 & 1 & 1 & 0 \cr 1 & 1 & 0 & 0 & 1 \end{bmatrix}$$
and I can verify that $HG = \begin{bmatrix} I_3 \cr 0 \end{bmatrix}$, i.e. the $3\times 3$ identity stacked on two rows of zeros.
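For anyone who wants to reproduce this, here's a short NumPy sketch of Gauss-Jordan elimination of $[G \mid I]$ over $GF(2)$ (the function name `gf2_elimination` is mine; a different pivot order can give a different but equally valid $H$, though with this order I get the same $H$ as above):

```python
import numpy as np

def gf2_elimination(G):
    """Gauss-Jordan elimination of [G | I] over GF(2).

    Returns H such that H @ G % 2 is the identity stacked on rows of zeros,
    assuming G has full column rank.
    """
    K, N = G.shape
    A = np.concatenate([G % 2, np.eye(K, dtype=int)], axis=1)
    row = 0
    for col in range(N):
        # Any row with a 1 can serve as pivot -- GF(2) arithmetic is exact,
        # so there are no conditioning concerns.
        candidates = np.nonzero(A[row:, col])[0]
        if len(candidates) == 0:
            continue  # dependent column; G would not have full column rank
        p = row + candidates[0]
        A[[row, p]] = A[[p, row]]            # swap pivot row into place
        for r in range(K):                   # clear the column everywhere else
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2   # row addition = XOR in GF(2)
        row += 1
    return A[:, N:]

G = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1],
              [1, 0, 0],
              [0, 0, 1]], dtype=int)
H = gf2_elimination(G)
print(H)           # same H as derived above
print(H @ G % 2)   # identity in the top 3 rows, zeros in the bottom 2
```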
Then $\hat{x} = H(y+e)$ where $e$ is additive error, and $\hat{x}$ is a $5\times 1$ vector with the decoded data in the first 3 rows and, for $e = 0$, zeros in the other rows (the syndrome). Any single-bit error produces a nonzero syndrome, so it can be detected, but distinct single-bit positions can share a syndrome, so there is not enough information to correct the error; worse, some two-bit errors produce a zero syndrome and go undetected.
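A quick enumeration makes this concrete (using the $H$ derived above; the syndrome of a single-bit error $e_i$ is just column $i$ of $H$'s bottom two rows):

```python
import numpy as np

H = np.array([[0, 1, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 0, 1]], dtype=int)

# Syndrome of each single-bit error e_i = column i of H's last two rows.
syndromes = [tuple(int(v) for v in H[3:, i]) for i in range(5)]
print(syndromes)  # [(0, 1), (1, 1), (1, 0), (1, 0), (0, 1)]

# Every single-bit syndrome is nonzero, so one-bit errors are detectable...
assert all(s != (0, 0) for s in syndromes)

# ...but positions 1 & 5 and positions 3 & 4 collide, so the error position
# can't be recovered, and flipping both bits of a colliding pair yields a
# zero syndrome -- an undetectable two-bit error:
e = np.array([1, 0, 0, 0, 1])      # bits 1 and 5 flipped
print((H @ e % 2)[3:])             # [0 0]
```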