In High School Computing I was taught the Structured Program Theorem - that you could implement any mathematical operation using:
- Sequence
- Selection
- Iteration
After completing a Computer Science degree, we can express what is required for any computable function more formally with the $\mu$-recursive functions (a small worked example follows the list):
- The constant $0$ function
- The successor function
- Selecting parameters
- Function composition
- Primitive Recursion
- The $\mu$-operator (look for the smallest $x$ such that...)
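To make the list above concrete, here is a minimal sketch in Python (my own illustration, not part of the formal definition) of the primitive recursion scheme as a combinator, with addition and multiplication obtained from it by composition:

```python
def prim_rec(g, h):
    # Primitive recursion scheme:
    #   f(0, *xs)     = g(*xs)
    #   f(n + 1, *xs) = h(n, f(n, *xs), *xs)
    def f(n, *xs):
        acc = g(*xs)
        for i in range(n):      # exactly n steps, so f always terminates
            acc = h(i, acc, *xs)
        return acc
    return f

# Addition:       add(0, y) = y,  add(n + 1, y) = succ(add(n, y))
add = prim_rec(lambda y: y, lambda i, acc, y: acc + 1)

# Multiplication: mul(0, y) = 0,  mul(n + 1, y) = add(y, mul(n, y))
mul = prim_rec(lambda y: 0, lambda i, acc, y: add(y, acc))

assert add(3, 4) == 7 and mul(3, 4) == 12
```

Everything built from the first five primitives terminates; only the $\mu$-operator introduces potentially unbounded search.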
This being the minimal set of building blocks. Translating this to code we get (a small Python sketch follows the list):
- The constant: `0`
- Incrementation: `_ + 1`
- Variable access: `x`
- Program/statement concatenation: `_; _`
- Countdown loops: `for ( x to 0 ) do _ end`
- While loops: `while ( x != 0 ) do _ end`
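As a sanity check on this translation, here is a minimal Python sketch (my own illustration, with `range` standing in for the countdown loop) of two functions built from nothing but these pieces:

```python
# Addition from: the constant 0, incrementation, variables, sequencing,
# and a countdown loop ("for ( x to 0 ) do _ end" runs its body x times).
def add(x, y):
    r = y
    for _ in range(x):
        r = r + 1        # incrementation
    return r

# Predecessor from the same ingredients -- no while loop needed.
def pred(x):
    r = 0
    p = 0
    for _ in range(x):   # after the loop, r == max(x - 1, 0)
        r = p
        p = p + 1
    return r

assert add(3, 4) == 7 and pred(0) == 0 and pred(5) == 4
```

The while loop only becomes necessary for unbounded search, which is exactly the role of the $\mu$-operator.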
But I've come away looking for proof. How do we know that this covers all computable functions? Is there an obvious branch of mathematics that it does not cover? Is there a shortcoming in Lambda Calculus, some obscure mathematical operation it has yet to cover?
(Noting of course that
- Lambda Calculus is a subset of term rewriting,
- Mathematica chose term rewriting over Lambda Calculus as its foundation,
- and there is recent work in Univalent Foundations.)
My question is: Can all mathematical operations be encoded with a Turing Complete language?
No.
Or, as it would usually be expressed more positively: It is possible to mathematically define things that are not computable.
For example we can define the function $$ f(M) = \begin{cases} 1 & \text{if $M$ is a lambda term that has a normal form} \\ 0 & \text{otherwise} \end{cases} $$ which is definitely not computable -- it's essentially the halting problem.
Defining this function symbolically in all its gory detail is not particularly enlightening. It can be done, however, and the crucial point that makes it non-computable is that we need to express an unbounded quantification over all natural numbers, in order to say "there is (or isn't) a reduction sequence of some finite length that leads to a normal form".
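To make the non-computability concrete, here is a minimal Python sketch of the best we can actually implement: a semi-decider. It assumes a hypothetical `reduce_step` helper that performs one step of normal-order (leftmost-outermost) reduction and returns `None` when the term has no redex left; normal-order reduction reaches a normal form whenever one exists.

```python
def has_normal_form(term, reduce_step):
    # Semi-decider: returns 1 if `term` reduces to a normal form,
    # but loops forever -- instead of returning 0 -- if it does not.
    # `reduce_step` is an assumed helper performing one normal-order
    # beta-reduction step, returning None when no redex remains.
    while True:
        nxt = reduce_step(term)
        if nxt is None:      # no redex left: we found a normal form
            return 1
        term = nxt           # keep reducing; nothing bounds this loop
```

The loop can justify answering $1$ by reaching a normal form, but it can never justify answering $0$: that would mean ruling out reduction sequences of every finite length, which is the unbounded quantification just described.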
In terms of formalism, we're almost there with the minimization operator. But it is usually specified that $\mu x.E$ fails to terminate if there is no suitable $x$. In order to express mathematical definitions in general we need a total minimization operator $\bar\mu$, which cannot be expressed computably: $$ \bar\mu x.E(x) = \begin{cases} \text{the smallest $x\in\mathbb N$ such that $E(x)=0$} & \text{if such an $x$ exists} \\ 0 & \text{otherwise} \end{cases} $$ If we had this operator, we would be able to express every function that is definable in first-order logic. But it cannot be included in an actual programming language, because there would be no general way for an implementation to decide to return $0$ other than checking infinitely many cases.
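For contrast, the ordinary partial operator is trivially programmable as an unbounded search; a minimal Python sketch:

```python
def mu(E):
    # Partial minimization: returns the smallest x with E(x) == 0.
    # If no such x exists, the search never terminates rather than
    # returning 0 -- which is exactly what mu-bar would have to do.
    x = 0
    while E(x) != 0:
        x += 1
    return x

print(mu(lambda x: x * x - 9))   # prints 3, the smallest root of x^2 - 9
```

An implementation of $\bar\mu$ would have to return $0$ precisely in the cases where this loop runs forever, and there is no general way to detect that short of checking infinitely many cases.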
This is one instance of the Church-Turing thesis. Since we have no precise definition of "computable function" that is independent of any particular formalism, the thesis can be said to be a matter of faith -- but it is a matter of practical experience that every attempt to define the computable functions has turned out to lead to equivalent concepts, at least among those that weren't considered insufficiently expressive to begin with.
In particular, nobody has proposed a computing mechanism that appears to be practically realizable and cannot be expressed in (any given one of) those formalisms. So usually we just take our favorite formalism (they are all proved to be equivalent with impeccable rigor) to be the definition of "computable".