I have also asked this on the Electrical Engineering Stack Exchange but got no answer.
Before 1990 or so, communications link designers used the concept of the computational cutoff rate (instead of Shannon capacity) to estimate what could or could not be done over a digital communication channel. This idea goes back to Wozencraft and was later promoted by Massey.
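For reference (using the standard definition for a discrete memoryless channel with input distribution $p(x)$ and transition probabilities $W(y\mid x)$, which I believe matches the quantity Wozencraft and Massey worked with), the cutoff rate is

$$R_0 = \max_{p} \left[ -\log_2 \sum_{y} \left( \sum_{x} p(x)\,\sqrt{W(y\mid x)} \right)^{2} \right],$$

and it satisfies $R_0 \le C$, the Shannon capacity of the channel.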
My question is: in the age of turbo codes, what use or meaning, if any, does the computational cutoff rate retain beyond its original one, namely that it is nothing more than the maximum rate at which a sequential decoder can operate with finite expected computation per decoded bit?