Kolmogorov complexity of a product of two numbers


In his book "Theoretical Computer Science", Juraj Hromkovic informally defines the Kolmogorov complexity $K(x)$ of a word $x$ consisting of zeros and ones as the binary length of the shortest Pascal program that generates $x$. The Kolmogorov complexity $K(n)$ of a natural number $n$ is then defined as the Kolmogorov complexity of the binary representation of $n$.

To show that there's a constant $d$ such that $K(x)\leq|x|+d$ for all words $x$ (where $|x|$ is the length of $x$), he uses the following (pseudo) program $A_x$ to generate $x$:

begin
  write(x);
end

But he notes (in the 2007 German edition of the book, but not in the 2004 English edition) that we would have to prefix the code with two numbers $k$ and $l$ with $k$ being the length of the (constant) "prefix" of the program (up to and including the opening parenthesis) and $l$ being the length of the "suffix" (starting with the closing parenthesis). The reason for this is that we can thus store $x$ verbatim (as a sequence of ones and zeros) while the constant part of the program (stored in ASCII) is the same for each $x$ (and responsible for the constant $d$ above).
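This bookkeeping can be sketched in Python. The constants `PREFIX` and `SUFFIX` and the comma-separated header are my own stand-ins for the book's encoding, not Hromkovic's exact scheme; the point is only that the overhead over $|x|$ is the same constant $d$ for every $x$:

```python
# Sketch (my own stand-in, not the book's exact scheme): the program A_x is
# "<k>,<l>:" + PREFIX + x + SUFFIX, where k and l are the lengths of the
# constant ASCII prefix and suffix, and x is stored verbatim between them.
PREFIX = "begin\n  write("   # constant part up to and including '('
SUFFIX = ");\nend"           # constant part starting with ')'

def program_for(x: str) -> str:
    k, l = len(PREFIX), len(SUFFIX)
    return f"{k},{l}:{PREFIX}{x}{SUFFIX}"

# The overhead over |x| is the same constant d for every x:
d = len(program_for(""))
assert len(program_for("10110")) == 5 + d
```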

So far, so good. I can understand how this is supposed to work, but since he tries to avoid encoding the length of $x$ into the program (as he writes himself), we obviously have to know where the program ends to make sense of $l$. (One could imagine that one receives $A_x$ as a file.)

But two pages later there's an exercise where he claims that for a positive natural number $n=pq$ we have $K(n)\leq K(p)+K(q)+c$ with a constant $c$ independent of $n$, $p$, and $q$. I think the basic idea here is that $c$ is the length of a program that executes the corresponding programs for the two factors and multiplies the outputs. But if the programs for $p$ and $q$ are programs like the one above, we would need to know how long they are - which we don't. I can easily see how to prove $K(n)\leq 2\max(K(p),K(q))+c$ or $K(n)\leq K(p)+\log_2(K(p))+K(q)+c$ or something similar, but I'm at a loss regarding $K(n)\leq K(p)+K(q)+c$. Any ideas?

Best Answer

I am not familiar with the book you reference, but if what you say about it is true then it contains an error. It appears that the book is conflating two somewhat different concepts: Kolmogorov complexity and prefix free Kolmogorov complexity.

Prefix free Kolmogorov complexity is defined similarly to the usual Kolmogorov complexity, except that you require that the programming language you use has the property that no valid program is a prefix of another valid program. This means that the program

begin
   write(x)
end

would not be considered valid for arbitrary $x$ (consider, for example, what happens if $x$ is the string ") end": the program for the empty string is then a proper prefix of the program for $x$).
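The violation can be made concrete with a one-line rendering of the program (the helper `one_line_program` is just illustrative):

```python
# One-line rendering of "begin write(x) end" (illustrative, not from the book)
def one_line_program(x: str) -> str:
    return "begin write(" + x + ") end"

longer = one_line_program(") end")   # the program that prints ") end"
shorter = one_line_program("")       # a valid program that prints ""
assert longer.startswith(shorter)    # one valid program is a prefix of another
```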

A standard convention is to use $C(x)$ to denote the usual Kolmogorov complexity (sometimes also called "plain Kolmogorov complexity") of $x$ and $K(x)$ to denote the prefix free Kolmogorov complexity of $x$. The rest of my answer will use this convention.

In general, $C(x)$ does not have to agree with $K(x)$, even up to constant error. Instead, they are only guaranteed to agree up to $O(\log |x|)$ error. (Actually, slightly tighter bounds can be proved, including $K(x) \leq C(x) + \log |x| + O(\log\log |x|)$, but I don't want to get into those kinds of details. This same caveat applies every time I say $O(\log |x|)$ below.)

Prefix Free Kolmogorov Complexity

It is not the case that $K(x) \leq |x| + O(1)$. Instead, we can only prove $$ K(x) \leq |x| + O(\log |x|). $$

However, we do have the inequality $$ K(\langle x, y \rangle) \leq K(x) + K(y) + O(1), $$ where I am using the notation $\langle x, y\rangle$ to mean the string coding the pair of $x$ and $y$. It is not too hard to see that this is equivalent to the statement about products of natural numbers that you mentioned.
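One standard way to realize such a bound is a self-delimiting code: double every bit of the payload's length, terminate the doubling with "01", then append the payload verbatim. Each codeword announces its own end, so two codewords can be concatenated with no separator and still be split unambiguously; the splitting machinery contributes only the $O(1)$. A sketch (my own toy encoding, not optimized for constants):

```python
def encode(x: str) -> str:
    """Self-delimiting code: each bit of |x|'s binary form is doubled,
    the doubling ends with '01', then x follows verbatim."""
    n = bin(len(x))[2:]
    return "".join(b + b for b in n) + "01" + x

def decode(s: str) -> tuple:
    """Read one codeword off the front of s; return (payload, rest)."""
    i, nbits = 0, ""
    while s[i:i + 2] != "01":   # doubled bits are '00' or '11'
        nbits += s[i]
        i += 2
    i += 2                      # skip the '01' terminator
    n = int(nbits, 2)
    return s[i:i + n], s[i + n:]

# Two codewords concatenated with no separator still split unambiguously:
s = encode("101") + encode("0011")
x, rest = decode(s)
y, rest = decode(rest)
assert (x, y, rest) == ("101", "0011", "")
```

Note that this code produces codewords of length roughly $|x| + 2\log_2 |x| + 2$, which also witnesses the $K(x) \leq |x| + O(\log |x|)$ bound stated above.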

Plain Kolmogorov Complexity

On the other hand, the program $A_x$ that you mention does show that $C(x) \leq |x| + O(1)$. But it is not the case in general that $$ C(\langle x, y\rangle) \leq C(x) + C(y) + O(1). $$ Instead, it is only possible to prove that $$ C(\langle x, y\rangle) \leq C(x) + C(y) + O(\log (|x| + |y|)). $$
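The obstruction is easy to see: given only the raw concatenation of two plain programs, the split point is not recoverable, and specifying it costs about $\log_2$ of the total length, which is precisely the extra term. A toy illustration (the string `s` is an arbitrary stand-in for a concatenation of two programs):

```python
from math import ceil, log2

# With plain programs there is no marker between prog_x and prog_y, so the
# decoder sees only their concatenation and must be told where to cut:
s = "01100101"                         # stand-in for prog_x + prog_y
splits = [(s[:i], s[i:]) for i in range(len(s) + 1)]
assert len(splits) == len(s) + 1       # every cut position is possible

# Communicating the cut position costs about log2 of the total length:
boundary_bits = ceil(log2(len(splits)))
assert boundary_bits == 4
```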

Proof That the $\log$ Term Is Necessary

I will finish by quickly sketching a proof of why the additive $\log$ term in the last inequality is needed. Consider a string $z$ of maximal complexity (i.e. $C(z) \geq |z|$; such incompressible strings exist for every length). Let $n = |z|$ and look at the first $(\log n) - 1$ bits of $z$. These bits can be thought of as the binary representation of some number, $m$. Divide $z$ into three parts:

  • $x_1$ is the first $(\log n) - 1$ bits of $z$
  • $x_2$ is the next $m$ bits of $z$
  • $y$ is the remaining bits of $z$

Now let $x$ be the concatenation of $x_1$ and $x_2$ (in other words, $x$ is the first $m + (\log n) - 1$ bits of $z$). The point is just that we can reconstruct all of $x$ from $x_2$ alone. We do this by reading off the length of $x_2$ (yielding $m$), converting it to binary (yielding $x_1$), and then concatenating this with $x_2$ (yielding $x$). Therefore $$ C(x) \leq |x_2| + O(1). $$ But obviously $$ C(y) \leq |y| + O(1). $$ So we have $$ C(x) + C(y) \leq |x_2| + |y| + O(1) = |z| - ((\log n) - 1) + O(1), $$ even though we chose $z$ so that $C(z) \geq |z|$. Since $z$ is just the concatenation of $x$ and $y$, we have $C(z) \leq C(\langle x, y\rangle) + O(1)$; so if $C(\langle x, y\rangle) \leq C(x) + C(y) + O(1)$ held, we would get $|z| \leq |z| - ((\log n) - 1) + O(1)$, which is impossible for large $n$.
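The splitting step can be checked mechanically. The sketch below is my own; $z$ is forced to start with a 1 so that $x_1$ really is the binary representation of $m$, avoiding a leading-zero subtlety the proof glosses over:

```python
# Mechanical check of the splitting argument (my own sketch); z starts
# with a 1 so that x1 is exactly the binary representation of m.
import math
import random

random.seed(1)
n = 1024
k = int(math.log2(n)) - 1                  # (log n) - 1 header bits
z = "1" + "".join(random.choice("01") for _ in range(n - 1))

x1 = z[:k]            # first (log n) - 1 bits, read as a number m
m = int(x1, 2)
x2 = z[k:k + m]       # the next m bits
y = z[k + m:]         # the rest
x = x1 + x2

# x is recoverable from x2 alone: |x2| = m, and bin(m) gives back x1
assert bin(len(x2))[2:] + x2 == x

# |x2| + |y| = |z| - ((log n) - 1), the deficit the proof exploits
assert len(x2) + len(y) == n - k
```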