Recursive Definition Theorem


I found two versions of a theorem which guarantees that recursive defined functions do exist (or that recursive definitions do make sense).

The first is:

Given a function $f:X\to X$ and an element $a \in X$, there exists a unique function $F: \mathbb{N} \to X$ such that $F(0) = a$ and $F(n+1)=f(F(n))$ for all $n \in \mathbb{N}$.
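To make the first theorem concrete, here is a minimal Python sketch (the names `simple_recursion`, `f`, `a` are mine): the unique $F$ can be computed by starting at $a$ and applying $f$ a total of $n$ times.

```python
def simple_recursion(f, a):
    """Return the F: N -> X with F(0) = a and F(n+1) = f(F(n)),
    computed by iterating f starting from a."""
    def F(n):
        x = a
        for _ in range(n):
            x = f(x)
        return x
    return F

# Example: f(x) = 2*x and a = 1 gives F(n) = 2**n.
F = simple_recursion(lambda x: 2 * x, 1)
print([F(n) for n in range(5)])  # [1, 2, 4, 8, 16]
```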

The other one is:

Given an element $a \in X$ and, for each $n \in \mathbb{N} \setminus \{0\}$, a function $V_n: X^n \to X$, there exists a unique function $F: \mathbb{N} \to X$ such that $F(0) = a$ and $F(n+1) = V_{n+1}(F(0), F(1), \ldots, F(n))$ for all $n \in \mathbb{N}$.
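The second scheme (often called course-of-values recursion) can be sketched the same way. As an assumed calling convention, I bundle the family $(V_k)_k$ into one Python function `V(k, history)` that plays the role of $V_k$ applied to the first $k$ values:

```python
def course_of_values_recursion(V, a):
    """Return the F with F(0) = a and
    F(n+1) = V(n + 1, (F(0), ..., F(n)))."""
    def F(n):
        history = [a]
        for k in range(1, n + 1):
            history.append(V(k, tuple(history)))
        return history[n]
    return F

# Example: V_{n+1}(x_0, ..., x_n) = (n + 1) * x_n defines the factorial.
fact = course_of_values_recursion(lambda k, hist: k * hist[-1], 1)
print([fact(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]
```

Note that $V_{n+1}$ "knows" its own index $n+1$, which is exactly the extra power the first scheme lacks.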

I don't have an intuition for this. I understand the basic purpose: to guarantee that a recursive definition really does determine a function. But why define $F$ one way rather than the other? I gather you start from the element $a \in X$ and then "hop" from one value to the next in a fixed order.

How does it work?

Thanks in advance.

On BEST ANSWER

These two definitions correspond to the fact that when defining a recursive function, you can either make it depend only on the last value, or on all smaller values and on $n$.

In fact the first definition is very restrictive: it means the next value can be computed from the last value alone, without even knowing $n$.

The factorial, for instance, needs more information than that: $f(0)=1$ and $f(n)=nf(n-1)$. In particular $f(0)=f(1)=1$ but $f(1)\neq f(2)$; if the next value depended only on the last one, $f(0)=f(1)$ would force $f(1)=f(2)$. So $f$ cannot depend only on the last value.

Cases where the first definition does suffice are the Collatz function, or $f(n)=2^n$.
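The Collatz example fits the first scheme precisely because each step depends only on the current value. A small sketch (starting value 7 is arbitrary):

```python
def collatz_step(x):
    # One Collatz step: uses only the current value x, never the index n.
    return x // 2 if x % 2 == 0 else 3 * x + 1

# F(0) = 7 and F(n+1) = collatz_step(F(n)): the Collatz trajectory of 7.
x, trajectory = 7, [7]
for _ in range(5):
    x = collatz_step(x)
    trajectory.append(x)
print(trajectory)  # [7, 22, 11, 34, 17, 52]
```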