Prove that there is a p.c. function $θ$ such that $λx [ μy [ θ(x, y) = 1 ] ]$ is not p.c.


Exercise 1.6.28. The partial computable functions are not closed under $μ$. Prove that there is a p.c. function $θ$ such that $λx [ μy [ θ(x, y) = 1 ] ]$ is not p.c.

I found this little problem, which seems so simple that it makes it all the more interesting that I have not come up with a solution yet; I would appreciate some insight. My idea was to use the s-m-n lemma to build a p.c. function based on the condition that some p.c. function agrees with a noncomputable real, and that the unbounded search over indices could realize it. Here is what I mean, briefly:

$$ \theta(x,y) = \begin{cases} 1 & \begin{array}{l}\text{if $\varphi_{y}(x) = $ some random real with respect to} \\ \text{this particular encoding of programs,}\end{array} \\[0.5em] \mathit{undefined} & \text{otherwise}. \end{cases} $$

What I have in mind is that, given the unbounded search, we could choose the encoding so that no p.c. function ever hits the real, but the $μ$-operator eventually would. I am not sure how correct this is.

Best answer:

Your go-to when asked to show that a function is not partial computable should be a reduction from the halting problem. Here the key insight is that $$\mu y(\theta(x,y) \downarrow = 1) = k$$ implies that $\theta(x,z)\downarrow = 1$ does not hold for any $z<k,$ so we can obtain some information about the halting behavior at arguments smaller than $k$.

So we can consider the partial computable function $\theta(x,y)$ defined by $$ \begin{eqnarray}\theta(x,0) &=& \begin{cases}1& \Phi_x(x)\downarrow\\\uparrow&\Phi_x(x) \uparrow\end{cases} \\\\ \theta(x,y) &=& 1, \;\;\mbox{for } y\ne0. \end{eqnarray}$$

Then we have $$ \mu y(\theta(x,y)\downarrow= 1) = \begin{cases}0& \Phi_x(x)\downarrow\\1&\Phi_x(x)\uparrow\end{cases}$$ which is not computable.
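To make the reduction concrete, here is a hedged Python sketch. Divergence is modeled by the sentinel value `None`, and a hypothetical decidable predicate `toy_halts` stands in for the undecidable "$\Phi_x(x)\downarrow$" (both are artifacts of the sketch, not of the actual construction); under these stand-ins, minimizing $\theta$ decides the halting predicate:

```python
# Sketch of the answer's reduction. Assumptions: divergence is modeled
# by the sentinel None, and `toy_halts` is a decidable stand-in for the
# undecidable predicate "Phi_x(x) halts".

def theta(x, y, halts):
    """The answer's theta: theta(x, 0) = 1 if Phi_x(x) halts, undefined
    otherwise; theta(x, y) = 1 for every y != 0."""
    if y == 0:
        return 1 if halts(x) else None  # None models divergence
    return 1

def mu_min(f, x):
    """Soare's mu: the least y with f(x, y) defined and equal to 1.
    Skipping over a diverging step (None) is exactly what no real
    algorithm can do -- the sentinel is what makes this sketch runnable."""
    y = 0
    while True:
        if f(x, y) == 1:
            return y
        y += 1

toy_halts = lambda x: x % 2 == 0  # hypothetical stand-in oracle

def g(x):
    return mu_min(lambda u, y: theta(u, y, toy_halts), x)

# g(x) == 0 exactly when the stand-in halting predicate holds, and
# g(x) == 1 otherwise, so an algorithm computing g would decide halting.
```

The sketch only "works" because `None` lets us see past a diverging step; a genuine evaluation of $\theta(x,0)$ when $\Phi_x(x)\uparrow$ would never return, which is why $\lambda x[\mu y[\theta(x,y)=1]]$ is not p.c.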


EDIT

I think there is some potential for confusion here, so I'll add a quick note on the notation.

Looking through Soare's book, it's a bit annoying that he never seems to define $\mu$ (though perhaps I missed it). However, from the context of this problem and Theorem 1.5.7 preceding it, it's clear that he means $\mu x\phi(x)$ to be the minimum $x$ satisfying $\phi(x)$ (if such an $x$ exists, else undefined).

This is straightforward enough, but is potentially confusing due to the fact that in the theory of $\mu$-recursive functions, $\mu$ is often defined with a similar, but subtly different, meaning. Namely, if $f(x,\vec y)$ is a partial recursive function, we define $\mu x f(x, \vec y)$ as the result of an unbounded search for an $x$ with $f(x, \vec y ) =0$. In other words, it is the least $x$ such that $f(x,\vec y) = 0$ and $f(z,\vec y)$ is defined for all $z < x$ (if such an $x$ exists, else undefined).

This last stipulation that $f(z,\vec y)$ be defined for all $z<x$ is crucial to the conception of $\mu$ as not merely "minimization", as it is sometimes called, but as a (manifestly computable) unbounded search. Essentially, it enforces the idea that if one of the steps of the unbounded search fails to halt, then so must the whole search. This is the idea Soare is trying to get across (in a more general context than $\mu$-recursive functions) in this exercise and Theorem 1.5.7.
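The gap between the two readings of $\mu$ can be seen on a tiny example. In this Python sketch (again modeling a diverging step by the sentinel `None`, an assumption of the sketch), the partial function with $f(0)\uparrow$ and $f(1)=0$ separates them: pure minimization returns $1$, while the unbounded search diverges already at step $0$:

```python
def mu_minimize(f):
    """Pure minimization: least x with f(x) defined and equal to 0,
    silently skipping earlier diverging steps (None)."""
    x = 0
    while True:
        if f(x) == 0:
            return x
        x += 1

def mu_search(f):
    """Unbounded search: least x with f(x) = 0, but the search itself
    diverges (returns the sentinel None here) as soon as a step diverges."""
    x = 0
    while True:
        v = f(x)
        if v is None:
            return None  # a diverging step hangs the whole search
        if v == 0:
            return x
        x += 1

# f(0) diverges, f(1) = 0: minimization yields 1, the search diverges.
f = lambda x: None if x == 0 else 0
```

Under the search reading, the two operators agree whenever $f$ is total, which is why the "total functions only" fix mentioned below also works.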

The potential for confusion is made even worse by the fact that some authors, in defining the $\mu$-recursive functions, don't include the stipulation that $f(z,\vec y)$ be defined for all $z<x$ in the definition of $\mu$, but instead fix the closure issue by demanding that $\mu$ only be applied to total functions. This is much worse aesthetically than the other way, in my opinion, but it works and is fairly common.