Overbeck et al. state in "Code-based Cryptography" from 2009:
A syndrome decoding algorithm takes as input a syndrome -- not a codeword. Each syndrome decoding algorithm leads immediately to a decoding algorithm and vice versa.
I have two questions about this:
- Why does syndrome decoding work? Aren't there generally more error vectors $e$ than syndromes $He$ (for a parity-check matrix $H$ with dimensions $(n - k) \times n$)? How can syndromes be uniquely decoded into error vectors if that is true?
- How can a decoding algorithm (which computes the closest codeword to a given message) be used to construct a syndrome decoding algorithm?
I understand that the opposite direction is simple: given a syndrome decoding algorithm, one can construct a decoding algorithm by computing the syndrome of the received message, applying the syndrome decoding algorithm to it, and subtracting the resulting error vector from the received message.
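That construction can be sketched in a few lines of Python. This is my own illustrative sketch, not code from the book: `syndrome_decode` stands in for any syndrome decoder, and the usage below builds one by brute force for a toy $[3,1]$ repetition-style code.

```python
import numpy as np
from itertools import product

def decoder_from_syndrome_decoder(H, syndrome_decode):
    """Wrap a syndrome decoder into a full decoder (hypothetical helper).

    H: parity-check matrix over GF(2), shape (n-k, n)
    syndrome_decode: maps a syndrome tuple s = H r (mod 2) to an error vector e
    """
    def decode(r):
        s = tuple(H @ r % 2)              # syndrome of the received word
        e = np.array(syndrome_decode(s))  # estimated error pattern
        return (r + e) % 2                # over GF(2), subtracting e = adding e
    return decode

# Usage: toy parity-check matrix of the [3,1] repetition code.
H = np.array([[1, 1, 0],
              [1, 0, 1]])

# Brute-force syndrome decoder: map each syndrome to its lightest error pattern.
table = {}
for e in sorted(product([0, 1], repeat=3), key=sum):  # lightest patterns first
    table.setdefault(tuple(H @ np.array(e) % 2), e)

decode = decoder_from_syndrome_decoder(H, table.get)
print(decode(np.array([1, 0, 1])))  # single bit flip corrected -> [1 1 1]
```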
Yes. The equation $s = Hr = He$ has, for a given (computed) syndrome $s$, many solutions in terms of $e$: namely, as many as there are codewords, $2^k$. Indeed, if $e_1$ is a solution, then $e_1 + c_j$ (where $c_j$ is any nonzero codeword) is also a solution.
They cannot be "uniquely decoded", if by that you mean without uncertainty. We simply choose, among the set of $2^k$ possible error patterns (what we call the coset), the most probable one (what we call the coset leader), because in that way we minimize (but do not eliminate) the probability of wrong decoding. Which $e$ is most probable depends on the statistics of the channel; typically (namely, for a BSC with crossover probability $p < 0.5$) it is the error pattern with the fewest ones (minimal Hamming weight), which also corresponds to choosing the codeword closest to the received message.
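The coset structure is easy to see by brute force. The snippet below is a sketch with a toy $[4,2]$ code of my own choosing (not from the book): it groups all $2^n$ error patterns by their syndrome, so each group is a coset of size $2^k = 4$, and picks the minimal-weight member as the coset leader.

```python
import numpy as np
from itertools import product

# Illustrative [4,2] code: n = 4, k = 2, so H is (n-k) x n = 2 x 4
# and every coset contains 2^k = 4 error patterns.
H = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1]])
n = H.shape[1]

# Group all 2^n error patterns by their syndrome H e (mod 2): the cosets.
cosets = {}
for e in product([0, 1], repeat=n):
    s = tuple(H @ np.array(e) % 2)
    cosets.setdefault(s, []).append(e)

for s in sorted(cosets):
    leader = min(cosets[s], key=sum)  # coset leader: minimal-weight pattern
    print(f"syndrome {s}: leader {leader}, coset size {len(cosets[s])}")
```

Note that the coset of the zero syndrome is the code itself, and the difference of any two members of a coset is a codeword, which is exactly the $e_1 + c_j$ observation above.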