What is the decoding algorithm for polar codes (Arikan's error-correcting codes)?


I am having trouble implementing a decoder for the Polar channel code (by Erdal Arikan).

The encoding is done as follows:

We take the example of a source input being the 8-bit string [0,1,0,1,1,0,0,1]

Next, we expand this into a 16-bit string by inserting zeros at the "bad-channel" (frozen) positions 1, 2, 3, 4, 5, 6, 9, 10 (1-indexed), which gives: [0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,1]
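For concreteness, this freezing step can be sketched in a few lines of Python (positions 1-indexed, as above):

```python
# Sketch of the freezing step: place the 8 data bits into the
# non-frozen positions of a length-16 vector (1-indexed).
data = [0, 1, 0, 1, 1, 0, 0, 1]
frozen = {1, 2, 3, 4, 5, 6, 9, 10}   # "bad-channel" positions
bits = iter(data)
u = [0 if pos in frozen else next(bits) for pos in range(1, 17)]
print(u)  # [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1]
```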

Next, we apply the Polar transform matrix, for which the G16 entries are as follows:

[Image: the 16×16 matrix G16, i.e. the 4-th Kronecker power of Arikan's kernel F = [[1,0],[1,1]], with no bit-reversal permutation.]

Applying this matrix operator to our 16-bit input vector gives the new vector [0,1,1,1,1,0,0,0,1,0,0,0,0,1,1,1]
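The transform can be reproduced numerically; here is a sketch assuming G16 is the 4-th Kronecker power of the kernel F = [[1,0],[1,1]] without bit-reversal (this choice matches the vectors above):

```python
import numpy as np

F = np.array([[1, 0], [1, 1]], dtype=int)   # Arikan's 2x2 kernel
G = F
for _ in range(3):
    G = np.kron(G, F)                       # G16 = F (x) F (x) F (x) F

u = np.array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1])
x = u @ G % 2                               # encoding over GF(2)
print(x.tolist())  # [0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1]
```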

This vector is now transmitted over some channel. We assume the channel introduces some degradation, and the receiver sees the following data: [0,1,1,1,1,0,0,0,1,0,0,0,0,1,?,1]

We see that the 15th entry of the received data has been erased, symbolized by "?".

My question is, what is the concrete step-by-step algorithm to recover this 15-th erased bit according to Arikan's Polar code?
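(Not the full successive-cancellation decoder, but for this single-erasure example the following brute-force sketch at least shows how the frozen-bit constraints pin down the missing bit. It again assumes G16 = F⊗F⊗F⊗F, which over GF(2) is its own inverse, so u = x·G16 mod 2, and a valid codeword must decode to 0 at every frozen position:)

```python
import numpy as np

# Brute-force sanity check for a single erasure: try both values of "?",
# decode with u = x @ G16 mod 2 (G16 is self-inverse over GF(2)), and keep
# the value for which every frozen position decodes to 0.
F = np.array([[1, 0], [1, 1]], dtype=int)
G = F
for _ in range(3):
    G = np.kron(G, F)

frozen = [0, 1, 2, 3, 4, 5, 8, 9]                 # 0-indexed frozen positions
y = [0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, None, 1]   # None = erased bit
e = y.index(None)
for guess in (0, 1):
    x = np.array([guess if b is None else b for b in y])
    u_hat = x @ G % 2
    if not u_hat[frozen].any():                   # all frozen bits are 0
        recovered, u_rec = guess, u_hat

print(recovered)        # 1
print(u_rec.tolist())   # [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1]
```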

Please show the steps until the original bit is logically recovered for this example. I have come across many explanations of the abstract information-theoretic aspects of polar codes, but I am having trouble finding a concrete algorithm to decode this degraded "?" bit in this particular example.

Thank you for any help/pointers on this.