Understanding the definition of primitive recursion.


I'm a little (truthfully really) lost with the definition of primitive recursion. Here is the definition for primitive recursive functions that we have in our course (translated from German):

The class of primitive recursive functions is the smallest class of functions of $\mathbb{N}^l \to \mathbb{N}, l \geq 0$, which:

  1. contains the following basic functions:

    a. The constant function $f : \mathbb{N}^k \to \mathbb{N}, f(n_1, ..., n_k) = c$;

    b. The projection $\pi_i^k : \mathbb{N}^k \to \mathbb{N}, \pi_i^k(n_1, ..., n_k) = n_i$;

    c. The successor function $\text{succ}:\mathbb{N} \to \mathbb{N}, \text{succ}(n) = n+1$;

  2. And is closed under the following operations:

    a. Composition: If $f_1, ..., f_m : \mathbb{N}^k \to \mathbb{N}, g : \mathbb{N}^m \to \mathbb{N}$ are primitive recursive, then $g \circ (f_1, ..., f_m) : \mathbb{N}^k \to \mathbb{N}$, $(n_1, ..., n_k) \mapsto g(f_1(n_1, ..., n_k), ..., f_m(n_1, ..., n_k))$ is also primitive recursive;

    b. Primitive Recursion: If $g : \mathbb{N}^k \to \mathbb{N}, h : \mathbb{N}^{k+2} \to \mathbb{N}$ are primitive recursive, then so is $f : \mathbb{N}^{k+1} \to \mathbb{N}$ with $f(0, n_1, ..., n_k) = g(n_1, ..., n_k)$ and $f(n + 1, n_1, ..., n_k) = h(n, f(n,n_1,...,n_k),n_1,...,n_k)$.

I don't understand what is happening in definition 2.b.

  1. Why do $f,g,h$ have different domains?
  2. What does $f(n + 1, n_1, ..., n_k) = h(n, f(n,n_1,...,n_k),n_1,...,n_k)$ do?
  3. What is the goal with this method?

I think in (2b) it will help to consider the case $k=1$. It's the simplest case where we really see all the parts of (2b) play nontrivial roles.


The simplest point is the role of $g$: this provides us with our "starting values." Intuitively when we define $f$ we're given $f(0, n)$ right off the bat (no recursion needed). This is what $g$ provides us.

The role of $h$ is a bit more complicated. Intuitively, $h$ tells us how to define $f(a,b)$ in terms of "previous values" of $f$. The details are a little subtle. In order to figure out what $f(a+1,b)$ should be, $h$ is allowed three pieces of information: what $a$ is, what $b$ is, and what the "previous value" $f(a,b)$ is. It doesn't have to use all this information, but that's what it's allowed in principle.

(If you like you could imagine $h$ being given $a+1$, $b$, and $f(a,b)$ instead - this may feel a bit more natural since it's "seeing" the current inputs. But since we can figure out $a$ from $a+1$ and vice versa, these both give the same ultimate definition.)
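To make the bookkeeping concrete, here is a minimal Python sketch (my own names, not from the course notes) of the operator in (2b): it takes $g$ and $h$ and builds the unique $f$ satisfying the two equations.

```python
def primitive_recursion(g, h):
    """Build f from g and h per scheme (2b):
        f(0, *ns)     = g(*ns)
        f(n + 1, *ns) = h(n, f(n, *ns), *ns)
    """
    def f(n, *ns):
        acc = g(*ns)            # starting value supplied by g
        for i in range(n):      # unroll the recursion iteratively
            acc = h(i, acc, *ns)
        return acc
    return f

# Example: addition a + b, via g(b) = b and h(x, y, z) = y + 1.
add = primitive_recursion(lambda b: b, lambda x, y, z: y + 1)
```

Note that $h$ receives all three arguments even when (as here) it only uses the previous value $y$.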


To see this all in action, consider the following definition of multiplication $f(a,b)=a\cdot b$ in terms of addition:

  • $f(0,b)=0$.

  • $f(a+1,b)=b+f(a,b)$.

The first point tells us that we're looking at $$g: n\mapsto 0$$ (all our starting values are $0$). The second point tells us that $h$ should take in the data $$a, f(a,b),\mbox{ and }b$$ and output $b+f(a,b)$. So this tells us that $$h(x,y,z)=y+z.$$

Note that the first coordinate isn't playing a role in $h$. Primitive recursion does allow the "next-step-provider" $h$ to see both inputs and the previous value, but we don't need to use all of that information. In most natural examples we don't in fact need it.
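As a sanity check, here is a small Python sketch (mine, not part of the answer) of the multiplication recursion $f(0,b)=0$, $f(a+1,b)=f(a,b)+b$, with comments marking which piece plays the role of $g$ and which of $h$:

```python
def mult(a, b):
    """f(a, b) = a * b via primitive recursion on a."""
    if a == 0:
        return 0                   # base case: g(b) = 0
    # step case: h(a-1, f(a-1, b), b) = f(a-1, b) + b,
    # i.e. h(x, y, z) = y + z (the counter x goes unused)
    return mult(a - 1, b) + b
```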


Finally, it may also help to go in the opposite direction: given a $g$ and $h$, try to compute the first few values of the resulting $f$. For example, try $$g(b)=b,\quad h(x,y,z)=xyz.$$ What is $f(2,3)$?


I think a simple example explains best what is happening: let's take a look at the factorial function $f(n)=n!$. It is defined by

$f(0)=1$

$\forall n: f(n+1)=(n+1)\cdot f(n)$

Here, $h(n, f(n))=(n+1)\cdot f(n)$, i.e. $h(x,y)=(x+1)\cdot y$. So $h$ is the function that defines how you calculate the next value from the previous one.
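In Python (a sketch of my own, mirroring the factorial recursion $f(0)=1$, $f(n+1)=(n+1)\cdot f(n)$):

```python
def fact(n):
    """n! via primitive recursion: g() = 1, h(x, y) = (x + 1) * y."""
    if n == 0:
        return 1                   # base case: g() = 1
    # step case: h(n-1, f(n-1)) = ((n-1) + 1) * f(n-1) = n * f(n-1)
    return n * fact(n - 1)
```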

The scheme $f(n+1,n_1,\ldots,n_k)=h(n,f(n,n_1,\ldots,n_k),n_1,\ldots,n_k)$ just generalizes this concept to functions with additional parameters $n_1,\ldots,n_k$, which are carried along unchanged through the recursion.