What is meant by "finite algorithm" in Turing's definition of the computable numbers?


In a comment thread on SlateStarCodex, I made a philosophical point about the "realness" of the reals, in the process of which I attempted to summarize the definition of Turing-computable numbers:

Essentially, Turing’s definition amounts to “anything that can be defined via a finite algorithm and computed to arbitrary precision.” There are only as many such numbers as there are integers (i.e. they are countable), because every algorithm can be described as a (very long) number.

Another commenter, HeelBearCub, asked for clarification:

What do you mean by “finite”? The integers aren’t finite. You can’t write an algorithm that will enumerate all the integers, so I’m a little confused by the definition.

Isn’t it trivially easy to write an algorithm that can enumerate real numbers to some arbitrary precision? Iterate over all integers adding/subtracting 0.1 from the resultant sum.

And can’t you then do that same process for every precision, again as defined by the integers? That gets you every real using only integers.

So it seems like the word finite is doing some work here, but I’m not exactly sure what work.

HeelBearCub said I could post the question and my response here.


I believe your confusion isn't actually about the definition of "finite" so much as it is about what we mean by "algorithm" and "enumerate." Also, note that we are applying "finite" to our description of the algorithm, not to the behavior of the algorithm; we don't mean "terminating in a finite amount of time", but "described in a finite number of characters/operations/etc". (Turing's definition is actually that the algorithm has a finite number of "states", but for our purposes it's not worth exploring exactly what that means.)

You can’t write an algorithm that will enumerate all the integers, so I’m a little confused by the definition.

Sure you can! Let's use Python to express our algorithm, and print each number on a new line:

i = 0
while True:
    print(i)
    if i <= 0:
        i = -i + 1  # 0 -> 1, -1 -> 2, -2 -> 3, ...
    else:
        i = -i      # 1 -> -1, 2 -> -2, ...

This algorithm will never terminate (assuming infinite time and computing resources--not a good assumption in the real physical universe, of course), but the algorithm itself has been expressed in a finite number of characters.

Isn’t it trivially easy to write an algorithm that can enumerate real numbers to some arbitrary precision?

Sure, if you pick an arbitrary precision (a fixed number of decimal places) and a bounded range, you can enumerate all numbers within that range at that precision, because there are only finitely many of them. And you can iterate this process over ever-larger ranges and ever-finer precisions. But this is not a definition of, much less an enumeration of, all real numbers. It is not even an enumeration of the rationals: for instance, you will create increasingly precise approximations of $\frac{1}{3}$ (e.g. 0.33333333333333333333333333333333333), but you will never actually see the number $\frac{1}{3}$ itself appear in your list. (Nor will you see $\pi$, $e$, etc.)
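To make that concrete, here is a minimal sketch of the commenter's proposal (the function name and parameters are my own invention). Using exact `Fraction` arithmetic to avoid floating-point noise, it lists every number with a given number of decimal places in a bounded range; every number it ever emits has a terminating decimal expansion, so $\frac{1}{3}$ is never among them at any precision:

```python
from fractions import Fraction

def enumerate_at_precision(bound, places):
    """Yield every number with `places` decimal places in [-bound, bound]."""
    denom = 10 ** places
    n = -bound * denom
    while n <= bound * denom:
        yield Fraction(n, denom)
        n += 1

# 1/2 = 0.50000 terminates, so it shows up; 1/3 = 0.333... never does.
listed = set(enumerate_at_precision(1, 5))
assert Fraction(1, 2) in listed
assert Fraction(1, 3) not in listed
```

Raising `places` gets you better and better approximations of $\frac{1}{3}$, but the exact value is excluded at every stage.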

Turing's model isn't a single algorithm for enumerating all calculable numbers (although that happens to be a side effect, sort of); it's a one-to-one association from algorithms to numbers. In a sense, each algorithm for computing a calculable number is that number, or at least the definition of that number. For instance, the algorithm to calculate $\frac{1}{3}$ is simple: print 0, then print ., then print the digit 3 forever without stopping. (Turing prints the numbers in a different format using only 0's and 1's, but the point holds.) A couple of things are important to note: first, this algorithm represents the entire number to any precision; it will never "finish" printing the number, but if we want to know what $\frac{1}{3}$ is to the $10^{100^{100^{100}}}$th decimal place, we simply need to wait (until long after the universe has stopped existing, etc...this is of course only a thought-experiment!). Second, this is completely different from the algorithm to calculate $\pi$, or even to calculate 0.333333333.
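The "print the digit 3 forever" algorithm above can be sketched as a Python generator (a rough stand-in for a Turing machine, not Turing's own formalism, which emits only 0's and 1's). The algorithm itself is a few finite lines, yet it denotes the entire infinite expansion; to know the number to any precision, you just consume enough of the stream:

```python
import itertools

def digits_of_one_third():
    """Yield the decimal expansion of 1/3 one character at a time, forever."""
    yield "0"
    yield "."
    while True:
        yield "3"

# The stream is infinite, but any finite prefix is reachable in finite time:
prefix = "".join(itertools.islice(digits_of_one_third(), 7))
# prefix == "0.33333"
```

A different computable number, say $\pi$ or 0.333333333, would require a different (but still finitely-described) generator.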

Now, what's the difference between the set of numbers enumerated via Turing's method (the set of all valid programs listed in ascending order of length/complexity/etc--or rather, the numbers corresponding to the (infinitely-long) outputs thereof) and the set of all real numbers? Well, crucially, Cantor proved that you cannot enumerate the real numbers. So (in any particular algorithm schema) there are "real" numbers with no corresponding algorithm for computing them.
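Cantor's diagonal argument can itself be sketched in a few lines. Given any purported enumeration of reals (here a toy finite table of digits stands in for the real, doubly-infinite one), build a number whose $n$th digit differs from the $n$th digit of entry $n$; it then cannot equal any entry in the list:

```python
def diagonal_digit(d):
    # Pick a digit different from d (avoiding 0 and 9 sidesteps the
    # 0.0999... = 0.1000... dual-representation ambiguity).
    return 5 if d != 5 else 4

# Toy stand-in for an enumeration: row n holds the digits of "real" n.
table = [
    [1, 4, 1, 5],
    [7, 1, 8, 2],
    [3, 3, 3, 3],
    [0, 0, 0, 0],
]

# The diagonal number differs from entry n in its nth digit,
# so it appears nowhere in the enumeration.
diag = [diagonal_digit(table[n][n]) for n in range(len(table))]
# diag == [5, 5, 5, 5]
```

Applied to the enumeration of computable numbers, this is subtler: the diagonal procedure is not itself computable (deciding whether the $n$th program is a valid number-printer runs into the halting problem), which is why the computable numbers can be countable without contradiction.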

(Note that as pointed out by JohnHughes in the comments below, this does not fully resolve the question of whether there might exist some uncountable collection of algorithmic schema that might encompass the entirety of the real numbers.)