I am having a hard time understanding the different optimizations for LDPC codes. Most importantly, I find it hard to compare different codes given their lengths. For example, MacKay is a relevant author in this field and has co-authored multiple papers on it, such as
- Davey, Matthew C., and David J. C. MacKay. "Low density parity check codes over GF(q)." Information Theory Workshop, 1998. IEEE, 1998.
The parity-check matrices for his codes, as well as those of other authors, are publicly available.
Even though these matrices sometimes share the same rate (K/N, or equivalently 1 − M/N for a full-rank M × N parity-check matrix), they belong to codes of different lengths. This makes them harder to compare, since most applications already have a fixed block length. In my view, it would be easier to adapt the code to the application, not the other way around.
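To make concrete what I mean by "same rate, different lengths", here is a minimal sketch (my own illustration, not from any particular paper) of how the rate follows from the parity-check matrix dimensions, assuming the matrix has full rank:

```python
def code_rate(num_rows: int, num_cols: int) -> float:
    """Rate of a code whose full-rank (M x N) parity-check matrix
    has M = num_rows and N = num_cols; then K = N - M."""
    k = num_cols - num_rows  # information bits, assuming full rank
    return k / num_cols

# Two matrices with very different block lengths but the same rate:
print(code_rate(500, 1000))  # 1000-bit blocks -> 0.5
print(code_rate(50, 100))    # 100-bit blocks  -> 0.5
```

So the rate alone does not pin down the block length, which is exactly what makes a direct comparison awkward.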
My question is whether it is possible to resize these matrices while keeping their coding properties. For instance, if I had the parity-check matrix of an (N, K) code with N = 1000 and K = 500, could I scale it down to N2 = 100 and K2 = 50?