Why are the constructible numbers a proper subset of the computable numbers?


Well, obviously $\pi$ is transcendental, therefore not algebraic. The constructibles are clearly a subset of the algebraics, so $\pi$ is not constructible. Yet $\pi$ is computable. That's a perfect counterexample showing that the constructibles are a proper subset of the computables.

But I'm having a hangup nonetheless. My comprehension of what makes a number computable is, in essence, what the colloquial term "computer" conjures in my imagination. Maybe that's where my hangup is. Every definition I read (and I admit I'm no expert) seems to invoke the standard notion of computation, i.e. what computers do. Some definitions explicitly reference computers, algorithms, programs, or Turing machines; others offer somewhat vaguer notions of logical languages. Algorithms and functions aren't defined in context, but I suppose they mean a sequence of defined mathematical functions and operations, allowing for iteration and conditional branching, and terminating after a finite number of said operations. Ultimately, though, the process converges with arbitrary precision to the target "computed number" within the reals.
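
For instance, what I have in mind is something like this Python sketch (my own illustration, with names and error bounds of my own choosing): an algorithm that approximates $\sqrt{2}$ to any requested precision using only exact rational arithmetic, a loop, and a stopping condition.

```python
from fractions import Fraction

def sqrt2_approx(e):
    """Return a rational within 2**-e of sqrt(2), using Newton's
    method on exact rationals. The only primitives are field
    arithmetic, iteration, and a conditional stopping test."""
    x = Fraction(3, 2)  # starting guess, above sqrt(2)
    # Newton iteration x -> (x + 2/x)/2 stays above sqrt(2) and
    # converges quadratically; stop once x^2 - 2 <= 2^-e, which
    # forces |x - sqrt(2)| = (x^2 - 2)/(x + sqrt(2)) < 2^-e.
    while x * x - 2 > Fraction(1, 2 ** e):
        x = (x + 2 / x) / 2
    return x
```

The point of the sketch is that the output is only ever an approximation: the algorithm produces rationals as close to $\sqrt{2}$ as you like, but never $\sqrt{2}$ itself.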

Okay, fine. But how is that different from what is constructible? Euclidean geometry allows me to add, subtract, multiply, divide, and take square roots with infinite precision (ideally, of course). And those are more operations than what a computer can do!
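
To make "infinite precision" concrete, here is a small Python sketch (my own illustration, not standard terminology) of numbers of the form $a + b\sqrt{2}$ with rational $a, b$ — the kind of "layer" that one square-root extraction adds to the rationals — showing that addition, multiplication, and inversion stay exactly within that set.

```python
from fractions import Fraction

class QSqrt2:
    """An exact number a + b*sqrt(2) with rational a, b.
    The field operations never leave this set, so constructible
    arithmetic is exact, not approximate."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)
    def inverse(self):
        # 1/(a + b sqrt2) = (a - b sqrt2) / (a^2 - 2 b^2)
        n = self.a * self.a - 2 * self.b * self.b
        return QSqrt2(self.a / n, -self.b / n)
```

Each new square root extends the field by one such layer, which is why the constructibles sit inside the algebraics.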

Sure, you can program a computer to implement transcendental functions (but even those are just approximations to arbitrary precision, typically implemented with Taylor series). At the heart of a computer, all you have is addition, implemented in bit logic at the electronic level. But that's literally it! A computer isn't capable of more; everything else is just a cosmetic interpretation of the 1's and 0's stored in memory. All operations (all of them) are ultimately implemented using addition (okay, you have bit shifting too, but that is essentially just doubling, i.e. adding a number to itself).
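
For instance, here's a Python sketch (my own illustration of the shift-and-add idea) of how integer multiplication reduces to nothing but addition and doubling:

```python
def shift_add_multiply(x, y):
    """Multiply two non-negative integers using only addition and
    doubling (i.e. bit shifts), the way hardware multipliers reduce
    multiplication to add/shift steps."""
    result = 0
    while y:
        if y & 1:           # low bit of y is set: add current shifted x
            result = result + x
        x = x + x           # doubling plays the role of a left shift
        y >>= 1             # move on to the next bit of y
    return result
```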

And addition is one of the five basic operations in Euclidean geometry. So how is it that a computer can resolve more numbers than what is constructible? It seems to me that I should be able to evaluate even more numbers, and with better precision, than any real computer ever could, simply by carrying out the same operations a computer performs, but in a Euclidean construction instead, making deliberate choices.

Where is my failure?


There is 1 solution below.


Briefly, a real number $r$ is constructible if it can be expressed exactly as a combination of some "basic functions" applied to some "basic starting numbers." That "exactly" is crucial here: a process by which we produce constructible numbers closer and closer to $r$ without ever reaching $r$ itself is irrelevant to the constructibility of $r$.

By contrast, those limiting processes are exactly what computability is about: $r$ is computable if we can computably find a sequence of rationals which converge to it quickly (usually we require specifically that the distance between the $e$th and $(e+1)$th rational is $<2^{-e}$).
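
For instance (an illustration of my own, not part of any standard definition), here is a Python sketch that produces a rational provably within $2^{-e}$ of $\pi$ via Machin's formula, exactly the kind of fast-converging rational sequence the definition asks for:

```python
from fractions import Fraction

def arctan_inv(n, terms):
    """Partial sum of arctan(1/n) = sum_k (-1)^k / ((2k+1) n^(2k+1)).
    Alternating series, so the truncation error is below the first
    omitted term."""
    s = Fraction(0)
    for k in range(terms):
        s += Fraction((-1) ** k, (2 * k + 1) * n ** (2 * k + 1))
    return s

def pi_approx(e):
    """Return a rational within 2**-e of pi, using Machin's formula
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    # Crude but safe term count: each extra term of arctan(1/5)
    # shrinks the error by a factor of 25, so e//4 + 3 terms leave
    # an error far below 2**-e.
    terms = e // 4 + 3
    return 16 * arctan_inv(5, terms) - 4 * arctan_inv(239, terms)
```

Every output is an exact rational, and the sequence `pi_approx(1), pi_approx(2), ...` closes in on $\pi$ at the required rate, yet no single output equals $\pi$ — which is precisely the "type difference" from constructibility.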

So there's just a fundamental type difference here. It will help to consider a concrete example of a problem which is "geometrically unsolvable but computationally solvable" - for example, trisecting an arbitrary angle. There is no finite sequence of "basic geometric operations" which will produce the desired result exactly, but it is nonetheless computable in the appropriate sense.
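
To make that concrete, a sketch of my own in Python: trisecting $\theta$ amounts to finding $\cos(\theta/3)$ from $\cos\theta$, and by the triple-angle identity that value satisfies $4x^3 - 3x = \cos\theta$. For general $\theta$ (famously $\theta = 60°$) this cubic cannot be solved by square roots, so no compass-and-straightedge construction produces it exactly; yet simple bisection computes it to any precision.

```python
def trisect_cos(c, e):
    """Given c = cos(theta) with theta in (0, pi), return cos(theta/3)
    to within 2**-e by bisecting the triple-angle identity
    4x^3 - 3x = c. For such theta the root lies in (1/2, 1), where
    f(x) = 4x^3 - 3x is strictly increasing."""
    lo, hi = 0.5, 1.0
    while hi - lo > 2 ** -e:
        mid = (lo + hi) / 2
        if 4 * mid ** 3 - 3 * mid < c:
            lo = mid          # root is to the right of mid
        else:
            hi = mid          # root is to the left of (or at) mid
    return (lo + hi) / 2
```

For $\theta = 60°$ this converges to $\cos 20° \approx 0.9396926\ldots$, a number that is computable but provably not constructible.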