Why does Turing computation (being an inconsistent formalism) have undecidable problems?


I'd like to apply the Church-Turing thesis to the Kleene-Rosser paradox:

Since the untyped lambda calculus is an inconsistent formalism, AND Turing machines are equal in computational power to the lambda calculus, SO we have the same Kleene-Rosser paradox in the Turing machine formalism.

THEN since any Turing-complete formalism is inevitably inconsistent, everything should be decidable in it.

OTOH we have the diagonal argument, which gives us the undecidable halting problem.
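The diagonal argument I mean can be sketched in Python (a conceptual sketch only: `halts` is the assumed decider, and the whole point is that it cannot exist):

```python
# Hypothetical sketch of the halting-problem diagonal argument,
# assuming a total decider `halts(prog, arg)` existed.

def halts(prog, arg):
    """Assumed total decider: True iff prog(arg) terminates.
    No such function can actually be implemented."""
    raise NotImplementedError

def diagonal(prog):
    # Do the opposite of what `halts` predicts about prog(prog).
    if halts(prog, prog):
        while True:        # loop forever if prog(prog) would halt
            pass
    else:
        return             # halt if prog(prog) would loop

# diagonal(diagonal) contradicts whatever halts(diagonal, diagonal)
# answers, so no total `halts` can exist.
```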

How come?

There is 1 answer below


The untyped lambda calculus, as understood today, is not itself a system of logical reasoning, and therefore it makes little sense to declare it to be "an inconsistent formalism" in and of itself.

What the untyped lambda calculus is, is just a term rewriting system: A set of strings that we call terms, plus a reduction relation between those terms. This relation has interesting properties that we can use for various purposes -- but those applications are not themselves a part of the calculus.

Church initially developed the calculus with a particular application in mind, namely a certain scheme for expressing all mathematical reasoning as equational manipulations within the calculus, as an alternative to, say, ordinary first-order logic. Unfortunately it turned out that some of the possible manipulations in the calculus do not correspond to valid mathematical reasoning when interpreted in the way Church envisaged -- the Kleene-Rosser paradox is one early instance of this; Curry and others later found simpler ones -- and that sunk the hope of using "can be expressed in this calculus" as a formalization of "is a valid mathematical argument".

(I don't know the exact details of how this was supposed to work. They are not easy to find nowadays -- because they didn't actually work, they don't get much publicity.)

So the particular logical system that Church built using the untyped lambda calculus is unsound -- but that doesn't mean that the underlying rewrite system itself is "inconsistent". That might have been a fair way to describe it if, for example, the paradox implied that the reflexive transitive closure of the reduction relation relates everything to everything else. But that is not so; we have impeccable finitistic proofs that different terms in $\beta$-normal form are not $\beta$-equivalent.
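For instance, the Church booleans $\lambda x.\lambda y.\,x$ and $\lambda x.\lambda y.\,y$ are both in $\beta$-normal form and are not $\beta$-equivalent. Using Python's own lambdas as an informal stand-in for lambda terms, one can at least see that the two terms behave differently on the same arguments:

```python
# Church booleans: two distinct beta-normal forms.
TRUE = lambda x: lambda y: x    # \x.\y.x  selects the first argument
FALSE = lambda x: lambda y: y   # \x.\y.y  selects the second argument

print(TRUE("a")("b"))    # a
print(FALSE("a")("b"))   # b
```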

When the Church-Turing thesis is applied to the untyped lambda calculus, that is a different application, not tainted by Kleene and Rosser's demonstration that Church's attempt to build a general system of logic failed. Here it just means something like:

Let $f$ be an effectively computable partial function $\mathbb N \to \mathbb N$. Then there is a lambda term $M_f$ such that

  • whenever $f(x)=y$ we have $ M_f\, \overline x =_\beta \overline y $, where $\overline x$ means the Church numeral for $x$
  • whenever $f(x)$ is undefined, $ M_f\, \overline x$ is not $\beta$-equivalent to any term in normal form.

Conversely, let $M$ be any untyped lambda term, and define $f_M$ by $$ f_M(x)=y \iff M\,\overline x =_\beta \overline y $$ Then $f_M$ is a well-defined partial function $\mathbb N\to\mathbb N$, and is effectively computable.

(or any of a wide variety of other possible coding schemes).
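To make the coding scheme tangible, here is a small illustration, again using Python lambdas informally in place of genuine lambda terms (the names `church`, `unchurch`, `SUCC`, and `ADD` are my own):

```python
# Church numeral for n: \f.\x. f(f(...(f x)...)) -- n applications of f.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def unchurch(c):
    """Recover the ordinary integer by counting applications of f."""
    return c(lambda k: k + 1)(0)

# M_f for f(n) = n + 1: the successor term \n.\f.\x. f (n f x)
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
# M_f for f(m, n) = m + n: apply SUCC to m, n times
ADD = lambda m: lambda n: n(SUCC)(m)

print(unchurch(SUCC(church(4))))             # 5
print(unchurch(ADD(church(2))(church(3))))   # 5
```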

And a failed attempt to use the lambda calculus as a system of logic casts no doubt on this fact, nor on the concept of "effectively computable".