Why $C(n\mid l(n)) \ge C(n) - C(l(n))$ for Kolmogorov complexity


Denote by $C(n)$ the plain Kolmogorov complexity of $n$ and by $l(n)$ the length of a binary encoding of $n$. Why do we have $$ C(n\mid l(n)) \ge C(n) - C(l(n))? $$ Suppose I have a shortest program $p$ for $n$ given $l(n)$ and a shortest program $q$ for $l(n)$. To compose them into a program for $n$ (first compute $l(n)$, then use it together with $p$ to compute $n$), I need some way to tell $p$ and $q$ apart, so I have to put additional delimiting information into the combined program. Hence I do not simply get $C(n) \le C(n\mid l(n)) + C(l(n))$, yet my textbook states this as holding "clearly". Or does it only hold if $C(n) \ge l(n)$?
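To make the delimiting cost explicit: prefixing a self-delimiting copy $\bar{q}$ of $q$ to $p$ yields a program for $n$ that a universal machine can parse unambiguously, but only at the cost of roughly a factor $2$ on the $C(l(n))$ term (a sketch in the question's notation; constants not optimized):

```latex
% Encode q self-delimitingly by doubling each bit and terminating with 01;
% the decoder parses \bar{q}, runs q to get l(n), then runs p on input l(n).
\begin{align*}
\bar{q} &= q_1 q_1\, q_2 q_2 \cdots q_{l(q)} q_{l(q)}\, 01,
  &\quad l(\bar{q}) &= 2\,l(q) + 2, \\
C(n) &\le l(\bar{q}\,p) + O(1)
      = C(n \mid l(n)) + 2\,C(l(n)) + O(1), \\
\text{hence}\quad
C(n \mid l(n)) &\ge C(n) - 2\,C(l(n)) - O(1).
\end{align*}
```

So the naive composition only gives the bound with a factor $2$ (or an additive logarithmic term, with a cleverer prefix code), not the clean inequality stated in the book.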

Best answer

In Example 2.2.5 of Li and Vitanyi, they say that they are looking at a string such that the "pattern of 0s and 1s is very irregular, $C(n) \geq l(n)$". What I believe they mean (added part in italics) is that they are looking at a string such that "the pattern of 0s and 1s is very irregular, *that is*, $C(n) \geq l(n)$". The final sentence of their example is only intended to apply to that case. Overall, I find the prose in that example challenging to read.

However, as discussed in the comments, I don't immediately see how to fill in the argument as the book states it; the overall conclusion does seem salvageable along the lines sketched there.
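To make the delimiting overhead from the question concrete, here is a toy sketch (the helper names and the particular bit-doubling code are my own illustration, not from the book): doubling each bit of $q$ and terminating with `01` lets a decoder split the concatenation $\bar{q}p$ unambiguously, at a cost of $l(q)+2$ extra bits.

```python
def self_delimit(bits: str) -> str:
    """Encode a bit string so it can be recovered from the front of any
    stream: double each bit, then append the terminator '01'."""
    return "".join(b + b for b in bits) + "01"

def parse_prefix(stream: str) -> tuple[str, str]:
    """Undo self_delimit: return (decoded prefix, remainder of stream)."""
    decoded = []
    i = 0
    while stream[i:i + 2] != "01":
        decoded.append(stream[i])   # doubled bits are '00' or '11'
        i += 2
    return "".join(decoded), stream[i + 2:]

# Hypothetical shortest programs (arbitrary bit strings for illustration):
q = "1011"      # stands in for a shortest program for l(n)
p = "0010011"   # stands in for a shortest program for n given l(n)

combined = self_delimit(q) + p
q_back, p_back = parse_prefix(combined)
assert (q_back, p_back) == (q, p)

# Delimiting overhead beyond l(q) + l(p): here l(q) + 2 = 6 bits.
print(len(combined) - len(q) - len(p))  # → 6
```

This is exactly the extra information the question points at: the combined program has length $C(n\mid l(n)) + 2\,C(l(n)) + O(1)$, not $C(n\mid l(n)) + C(l(n))$.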