I was reading a paper about a code correcting a transposition and a single deletion error, and the authors claim that the redundancy of the code is only $\log(6n-3)$ bits.
But what does that mean? I tried to extract this from the proof of that fact, but what they proved instead was that
there exists such a code (with the structure they propose) whose redundancy is at most $\log(6n-3)$ bits,
and in the proof itself they just showed that the cardinality of the code is greater than or equal to $\frac{2^n}{6n-3}$.
How does that prove the hypothesis?
Also, if I have my own code, how do I compute its redundancy (or a reasonable bound on it)?
Thanks
It takes $\log_2 M$ bits to uniquely specify a member of a set of cardinality $M.$
The collection of all binary vectors of length $n$ clearly requires all of these $n$ bits. It has redundancy zero, where redundancy is defined as $$R=n-\log_2 M.$$
Thus redundancy at most $\log_2 (6n-3)$ means $$ n-\log_2 \# C \leq \log_2 (6n-3), $$ or $$ \log_2(2^n)-\log_2(6n-3) \leq \log_2\#C, $$ leading to $$ \frac{2^n}{6n-3} \leq \#C. $$
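As a numerical sanity check of this equivalence, here is a minimal Python sketch (the value $n=16$ is just an arbitrary example, not from the paper):

```python
import math

def redundancy(n, M):
    """Redundancy of a length-n binary code with M codewords: R = n - log2(M)."""
    return n - math.log2(M)

n = 16
# Lower bound on the code size from the argument above: #C >= 2^n / (6n - 3)
M_lower = 2**n / (6 * n - 3)

# A code meeting this bound with equality has redundancy exactly log2(6n - 3);
# any larger code has smaller redundancy.
print(redundancy(n, M_lower))   # equals log2(6n - 3)
print(math.log2(6 * n - 3))     # same value
```

So verifying $\#C \geq \frac{2^n}{6n-3}$ is exactly the same as verifying $R \leq \log_2(6n-3)$, since $\log_2$ is increasing.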