Brute force decoding of Reed-Muller codes


I'll be teaching Reed-Muller decoding shortly, and I'm asking myself: in the case of $RM(1,5)$, why not simply try out all 64 codewords and decode to the one at least distance from the received word? In other words, can I justify majority-logic decoding on practical grounds, or is it more a matter of mathematical beauty?


The answer to your question depends on your definition of Reed-Muller codes and what is meant by "decoding". The canonical definition of Reed-Muller codes is as nonsystematic codes (see, e.g., the first part of this answer of mine), in which case it is not sufficient to just determine which codeword $\hat{\mathbf c}$ is most likely to have been the cause of the received vector $\mathbf r$; one must then process $\hat{\mathbf c}$ in order to determine the information bits that were transmitted. As a result, in general, majority-logic decoding of Reed-Muller codes is both the most practical and the most beautiful way of decoding them, because the algorithm produces the information bits directly from $\mathbf r$ without ever explicitly calculating $\hat{\mathbf c}$.

Let us then consider how maximum-likelihood decoding of an $RM(1,n)$ code via comparing $\mathbf r$ to each of the $2^{n+1}$ codewords stacks up against a canonical Reed-Muller decoder.

  • Comparison decoder (sketched after this list): We have to determine the Hamming distance between $\mathbf r$ and each codeword. One way to do this is to compute $\mathbf r \oplus \mathbf c$ ($2^n$ XOR operations for each $\mathbf c$) and then count the ONEs in the sum. Ignoring the complexity of this counting, we note that a total of $2^{2n+1}$ XOR operations are required. Then, once we have the $2^{n+1}$ distances, we have to find the codeword $\hat{\mathbf c}$ that is nearest to $\mathbf r$, so more comparisons etc. are needed. But we are still not done: we have to determine the information bits from $\hat{\mathbf c}$, and more calculations are needed.
  • Canonical decoder (also sketched below): The $2^{n-1}$ checks that vote on each of the $n$ "degree-1" information bits are computed using one XOR each, for a total of $n\times 2^{n-1}$ XOR operations. We need to count how many of the $2^{n-1}$ checks are ONEs, but that is only half as much work as counting the number of ONEs in $2^n$ bits, and it needs to be done only $n$ times instead of $2^{n+1}$ times. Having determined the $n$ "degree-1" information bits, we have to find the corresponding "codeword" and subtract it from $\mathbf r$. It might appear that the computation of this codeword would need $(n-1)\times 2^n$ XOR sums, but if we compute the sums in Gray code order rather than natural binary counting order, we can manage with just $2^n$ XORs! In any case, since the "compare to all codewords" method requires storage of all $2^{n+1}$ codewords, we might instead store just the $2^n$ degree-1 codewords and use the $n$ "degree-1" information bits as the address into a lookup table. Finally, we count the number of ONEs in whatever is left to determine the "degree-0" information bit.
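To make the first bullet concrete, here is a minimal Python sketch of the "compare to all codewords" decoder (the function names and the bit-indexing convention, where bit $i$ of the position index $j$ is the $i$-th code coordinate, are my own illustration, not part of the answer above). It enumerates all $2^{n+1}$ codewords, computes the Hamming distance to $\mathbf r$ for each, and keeps the nearest one together with its information bits.

```python
def rm1_codeword(u0, u, n):
    """Codeword of RM(1, n): bit j is u0 XOR <u, j> (inner product of bit masks mod 2)."""
    return [u0 ^ (bin(u & j).count("1") % 2) for j in range(1 << n)]

def brute_force_decode(r, n):
    """Info bits (u0, degree-1 bits) of the codeword nearest to r in Hamming distance."""
    best = None
    for u in range(1 << n):                      # 2^n choices of the degree-1 bits
        for u0 in (0, 1):                        # 2 choices of the degree-0 bit
            c = rm1_codeword(u0, u, n)
            d = sum(ri ^ ci for ri, ci in zip(r, c))   # 2^n XORs, then a count of ONEs
            if best is None or d < best[0]:
                best = (d, u0, u)
    return best[1], best[2]

# Example: RM(1, 5) has 64 codewords of length 32 and corrects up to 7 errors.
n = 5
r = rm1_codeword(1, 0b10110, n)                  # transmit u0 = 1, degree-1 bits 10110
r[3] ^= 1; r[17] ^= 1                            # two channel errors
print(brute_force_decode(r, n))                  # (1, 22), i.e. the transmitted bits
```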

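For comparison, a similarly hedged sketch of the canonical majority-logic decoder from the second bullet, under the same indexing assumption: each degree-1 bit is obtained by a majority vote over $2^{n-1}$ single-XOR checks, the decoded degree-1 "codeword" is subtracted from $\mathbf r$, and the degree-0 bit is the majority of what remains.

```python
def majority_logic_decode(r, n):
    """Recover (u0, degree-1 bits) from the length-2^n received list r."""
    N = 1 << n
    u = 0
    for i in range(n):                               # vote on each degree-1 bit
        checks = [r[j] ^ r[j | (1 << i)]             # each check is a single XOR
                  for j in range(N) if not (j >> i) & 1]
        if sum(checks) > len(checks) // 2:           # majority of the 2^(n-1) checks
            u |= 1 << i
    # Subtract the decoded degree-1 "codeword" from r ...
    residual = [r[j] ^ (bin(u & j).count("1") % 2) for j in range(N)]
    # ... and the degree-0 bit is the majority of what is left.
    u0 = 1 if sum(residual) > N // 2 else 0
    return u0, u

# Same two-error example as before (RM(1, 5)):
n = 5
r = [1 ^ (bin(0b10110 & j).count("1") % 2) for j in range(1 << n)]
r[3] ^= 1; r[17] ^= 1
print(majority_logic_decode(r, n))                   # (1, 22) again
```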
To my mind, at least, the canonical decoding algorithm is far superior to the "compare to all codewords" by any criterion that one might choose to make the comparison.


That being said, there is a decoding method for the $RM(1,n)$ codes that effectively implements the "compare to all codewords" method, is quite efficient, and can also be applied to decode "soft-decision" outputs. If $\mathbf r \in \mathbb Z_2^{2^n}$ is the received vector, create a vector $\mathbf x \in \{+1, -1\}^{2^n}$ by setting $x_i = (-1)^{r_i}$. Let $H_n$ denote the $2^n\times 2^n$ Hadamard matrix in Sylvester form, that is, $$H_n = \left[\begin{matrix}H_{n-1} & H_{n-1}\\ H_{n-1} & -H_{n-1}\end{matrix}\right]; \qquad H_1 = \left[\begin{matrix}+1 & +1\\ +1 & -1\end{matrix}\right].$$ Note that the rows of $H_n$ are the "degree-1" codewords in the $RM(1,n)$ code translated from the $\{0,1\}$ alphabet to the $\{+1, -1\}$ alphabet. Then, $\mathbf y = \mathbf x H_n$ is a vector whose $k$-th entry has value $2^n - 2d_k$, where $d_k$ is the Hamming distance between $\mathbf r$ and the $k$-th of these codewords, $0 \leq k \leq 2^n-1$. The decoding algorithm then is to compute $\mathbf y$ and determine $$D = \operatorname{argmax}_k |y_k|.$$ If the standard binary representation of $D$ is $$D = \sum_{i=1}^n D_i 2^{i-1},$$ then $(D_n, D_{n-1}, \cdots, D_1)$ are the $n$ "degree-1" information bits, while $D_0 = \frac{1-\operatorname{sgn} y_D}{2}$ is the degree-0 information bit.

All this is fine and dandy, but the real point is that there exists a Fast Hadamard Transform algorithm (very similar to the radix-2 Fast Fourier Transform algorithm) that reduces the computational effort of finding $\mathbf y$ from $(2^n)^2$ multiplications and additions to $n2^n$ operations, and this Fast Hadamard Transform makes the decoder eminently practical and efficient. It was implemented over 45 years ago in the Mariner missions to Mars. See this answer of mine on mathoverflow for some more details and references.
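As an illustration of this transform-domain decoder, here is a small sketch (the function names are mine, not a standard library API) that computes $\mathbf y$ with the butterfly structure of the Fast Hadamard Transform and reads the information bits off $\operatorname{argmax}_k |y_k|$ and the sign of $y_D$.

```python
def fht(x):
    """Fast Hadamard transform (Sylvester ordering): n*2^n additions/subtractions."""
    y = list(x)
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b        # one butterfly: an add and a subtract
        h *= 2
    return y

def hadamard_decode(r, n):
    """Maximum-likelihood decoding of RM(1, n) via y = x H_n."""
    x = [1 - 2 * bit for bit in r]                   # map {0,1} -> {+1,-1}
    y = fht(x)                                       # y[k] = 2^n - 2*d_k
    D = max(range(len(y)), key=lambda k: abs(y[k]))  # argmax |y_k|
    u0 = 0 if y[D] > 0 else 1                        # D_0 = (1 - sgn y_D)/2
    return u0, D

# Same two-error example (RM(1, 5)):
n = 5
r = [1 ^ (bin(0b10110 & j).count("1") % 2) for j in range(1 << n)]
r[3] ^= 1; r[17] ^= 1
print(hadamard_decode(r, n))                         # (1, 22) as before
```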