I am following Epstein & Carnielli's book on Computability and am puzzled by their reasoning: on p. 131, Theorem 2, they define a universal computation predicate C(n,b,r,q) by induction on n. I have trouble understanding why this predicate as defined is primitive recursive. They argue as follows: "First, note that C is primitive recursive since every condition is obtained by bounded existence on some primitive recursive condition".
I do not understand this reasoning, even though it appears to be a rather trivial observation. What they say is true, but I think it only shows that the predicate C(n,-,-,-) is primitive recursive for each fixed n, not that C is primitive recursive as a whole.
In fact, why is this so? They define C by induction on n, so in the definition of C(n+1,u,v,w) they are free to use the values C(n,x,y,z) for any x,y,z, even values larger than u,v,w, respectively. It is not at all obvious to me that this results in a primitive recursive predicate, and even if so, how it is justified by the reasoning in the book. Can someone please explain?
Thank you.
First, as an aside, it is entirely possible for one argument to get arbitrarily large while primitive recursion is done on another argument. For example, in $$ f(0,x) = x\\ f(n+1, x) = f(n, x^2) $$ the second argument grows at each step as the recursion on the first unwinds: $f(3,x) = f(2,x^2) = f(1,x^4) = f(0,x^8) = x^8$. We could replace $x^2$ with any primitive recursive function.
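A minimal Python sketch of this recursion (the name `f` is just the one from the displayed equations) makes the unwinding concrete:

```python
def f(n, x):
    """Primitive recursion on n: f(0, x) = x, f(n+1, x) = f(n, x**2).
    The second argument grows even though the recursion is on the first."""
    if n == 0:
        return x
    return f(n - 1, x * x)

# Unwinding f(3, 2): f(2, 4) -> f(1, 16) -> f(0, 256)
print(f(3, 2))  # prints 256, i.e. 2**(2**3)
```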
Back to the proof in the book. It is indeed a little confusing to see how the function that they construct is primitive recursive based on their argument. I'd also be interested in an answer that explains that.
There is an alternate way to phrase the proof that is a little easier in my opinion. This at least shows that the theorem is correct.
First, remember that the role of $q$ in $C(n,b,r,q)$ is to represent the number of steps required for a particular computation to halt. Therefore, rather than defining the function by primitive recursion on $n$, it is possible to define it by primitive recursion on $q$.
That means that we only need to show that, if we know the complete state of the computation after $q$ steps, we can compute the complete state after $q+1$ steps using a primitive recursive function. This can be shown, tediously, by using Gödel numbering methods to represent the complete state as a natural number, in a way that these states can be manipulated primitive recursively.
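As a toy illustration (not the book's actual construction), here is one way a machine state could be coded as a single natural number and advanced one step. The machine, the coding $2^{pc}\,3^{reg}$, and the names `encode`, `exponent`, `step` are all invented for this sketch; the point is that the step function uses only arithmetic and bounded search, the kinds of operations that are primitive recursive:

```python
def encode(pc, reg):
    # Godel-style coding of a state (program counter, register) as one number.
    return 2 ** pc * 3 ** reg

def exponent(p, n):
    # Largest e with p**e dividing n -- a bounded search, hence primitive recursive.
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def step(state):
    """One step of a toy program: while reg > 0, decrement reg (pc stays 0);
    when reg hits 0, jump to pc = 1, the halt state."""
    pc, reg = exponent(2, state), exponent(3, state)
    if pc == 1:
        return state              # halted: the state no longer changes
    if reg > 0:
        return encode(0, reg - 1)
    return encode(1, 0)
```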
We then make a helper function $\hat C(n,b,q)$ which produces a sequence $\langle m_1, m_2, \ldots, m_q\rangle$ of natural numbers. Each of these numbers is a code for the entire computational state that we would have if we had simulated the evaluation of $\phi_n(b)$ for each number of steps from $1$ to $q$.
Finally, we define $C(n,b,r,q)$ to hold if the final element of the sequence $\hat C(n,b,q)$ shows that the computation halted at that point with value $r$. This is primitive recursive, because it only requires extracting the final element from a sequence and examining it to see if the computation has halted with the correct value.
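The overall shape of this definition can be sketched in Python. This is a toy model, not the book's construction: a Python list stands in for a Gödel-coded sequence, the machine is a trivial countdown whose halting value is always $0$, and $n$ is unused; only the structure of $\hat C$ and $C$ is the point:

```python
def step(state):
    """Toy machine: state = (halted, value); decrement value to 0, then halt."""
    halted, value = state
    if halted or value == 0:
        return (True, value)
    return (False, value - 1)

def C_hat(n, b, q):
    """Sequence of the first q states of the toy computation on input b.
    (A list stands in for a Godel-coded sequence of state codes.)"""
    state = (False, b)
    states = []
    for _ in range(q):
        state = step(state)
        states.append(state)
    return states

def C(n, b, r, q):
    """Holds iff, after q steps, the computation has halted with value r.
    Only the final element of C_hat(n, b, q) is inspected."""
    if q == 0:
        return False
    halted, value = C_hat(n, b, q)[-1]
    return halted and value == r
```

Starting from input $b=3$, the toy machine passes through $(F,2), (F,1), (F,0)$ and halts as $(T,0)$ on its fourth step, so `C(0, 3, 0, 4)` holds while `C(0, 3, 0, 3)` does not.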