A famous 1962 paper by Tibor Radó shows that the "busy beaver" function $h$ (where $h(n)$ is the maximal number of steps that a halting Turing machine with $n$ states can run for) satisfies the property
- (A) for every computable total function $f\colon\mathbb{N}\to\mathbb{N}$ we have $h(n) \geq f(n)$ eventually (i.e., there exists $N$ such that for all $n\geq N$ we have $h(n) \geq f(n)$)
Now what is very easy to prove is the weaker
- (B) for every computable total function $f\colon\mathbb{N}\to\mathbb{N}$ we have $h(n) \geq f(n)$ infinitely often (i.e., for each $N$ there exists $n\geq N$ such that $h(n) \geq f(n)$)
(because if there were a computable $f$ such that $f(n)\geq h(n)$ eventually, we could modify it at finitely many points to get a computable one such that $f(n)\geq h(n)$ always, and then we could solve the halting problem for $n$-state Turing machines by running a machine for at most $f(n)$ steps: if it hasn't halted by then, it never will).
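The reduction in that parenthetical can be sketched in Python using a toy model (hypothetical names, not an actual Turing-machine simulator): "machines" are zero-argument generator functions that yield once per computation step and return when they halt, and `bound` plays the role of $f(n)$.

```python
_DONE = object()  # sentinel distinguishing "halted" from a yielded step

def halts(machine, bound):
    """Decide halting, given an upper bound on the running time of every
    halting machine of this size.  `machine` is a toy stand-in for a Turing
    machine: a zero-argument generator function that yields once per step
    and returns when it halts."""
    steps = machine()
    for _ in range(bound):
        if next(steps, _DONE) is _DONE:
            return True   # halted within the bound
    return False          # would need > bound steps, so it never halts

# If f were computable with f(n) >= h(n) for all n, then halts(M, f(n))
# would decide the halting problem for n-state machines -- a contradiction.
def halter():   # halts after 5 steps
    for _ in range(5):
        yield

def looper():   # runs forever
    while True:
        yield

print(halts(halter, 100), halts(looper, 100))  # True False
```

The point is that the only thing making `halts` unimplementable in reality is the bound itself: no computable bound can dominate $h$.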
Now Radó's proof is fairly opaque; it looks like a bit of magic, and I can't figure out the deep reason why he is able to prove (A) and not just (B). Is it something really specific to the busy beaver function, or is there some general trick that makes it possible to derive (A) from (B)? So:
Does an increasing function $h$ that satisfies (B) automatically satisfy (A)? If not, is there some simple condition we can add to get (A) from (B) that would "explain" Radó's proof?
And, if the previous questions do not make this trivial:
Are there Turing degrees containing functions $h$ satisfying (B) but no function satisfying (A)?
I haven't read Radó's proof closely, but the following way to (A) seems to be pretty straightforward, at least for the busy-beaver output function $BB_o$, where $BB_o(n)$ is the maximal number of 1s left on the tape after an $n$-state machine terminates.

Let $f$ be any total computable function. Then there is a number $m$ such that the program

1. `X := m`
2. `Y := f(X)`
3. write `Y+1` ones on the tape and halt

can be implemented in a machine with exactly $m$ states. (Basically this is just a matter of observing that we can construct the number $2^k$ using $O(k)$ states, for example.) Furthermore we can insert any number of copies of `X := X+1` between steps 1 and 2, each costing exactly one state (namely: move left and write a `1` on the tape). Each inserted state also increments $X$ by one, so $X$ remains equal to the machine's total number of states, and the above program can actually be implemented in $m$ states for all sufficiently large $m$.

Now if $f(m) \ge BB_o(m)$ for infinitely many $m$, then one of those $m$s must be sufficiently large, and that leads to a contradiction: the machine just described has $m$ states and leaves $f(m)+1$ ones on the tape, so by definition $BB_o(m)$ must be at least $f(m)+1 > f(m)$.

Thus we must have $f(n) < BB_o(n)$ eventually.
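The heart of that argument can be seen in a toy Python model (hypothetical names; the state count is an assumption of the construction, not simulated):

```python
def ones_written(m, f):
    # Toy model of the m-state machine built above: the X := X+1 padding
    # keeps the invariant X = (number of states), so the machine computes
    # f(m) and writes f(m) + 1 ones before halting.  We model only the
    # output length; that the machine really has m states is assumed.
    X = m          # step 1 plus the inserted padding states
    Y = f(X)       # step 2: compute f
    return Y + 1   # step 3: write Y+1 ones

# For every sufficiently large m there is an m-state machine whose output
# beats f(m), so BB_o(m) >= ones_written(m, f) = f(m) + 1 > f(m).
f = lambda n: n ** 2
assert all(ones_written(m, f) > f(m) for m in range(10, 100))
```

So $f(m) \ge BB_o(m)$ cannot hold at any sufficiently large $m$, which is exactly what rules out "infinitely often".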
Your question is about the busy-beaver time function $BB_t$, where $BB_t(n)$ is the maximum number of steps an $n$-state machine can take before it terminates. But this easily reduces to the above case: since $BB_t(n)\ge BB_o(n)$ (writing $BB_o(n)$ ones takes at least that many steps), a function that exceeds $BB_t(n)$ infinitely often must also exceed $BB_o(n)$ infinitely often -- which we've just seen is impossible for computable functions.
For an $h$ that satisfies (B) but not (A) we can take
$$ h(n) = \begin{cases} 42 & \text{if } n=0 \\ BB(n) & \text{if } h(n-1) < (n-1)^2 \\ h(n-1)+1 & \text{otherwise} \end{cases} $$
Then $h(n) < n^2$ infinitely often: between jumps $h$ grows by only $1$ per step, so $n^2$ eventually overtakes it, and each time the condition $h(n-1) < (n-1)^2$ fires we get a witness. Hence $f(n)=n^2$ is a counterexample to (A).
On the other hand $h(n)=BB(n)$ infinitely often, so an $f$ that eventually exceeds $h(n)$ would exceed $BB(n)$ infinitely often, which is impossible by the above argument.
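The jump pattern of this $h$ is easy to watch numerically. Since the real $BB$ is uncomputable, the sketch below substitutes a computable stand-in `mock_bb` (an assumption for illustration; any sufficiently fast-growing function produces the same pattern):

```python
def mock_bb(n):
    # Computable stand-in for the (uncomputable) busy beaver function.
    return 2 ** n

def h(n, memo={0: 42}):
    # The recursion from the answer: jump to BB(n) whenever the previous
    # value has fallen below (n-1)^2, otherwise creep upward by 1.
    if n not in memo:
        prev = h(n - 1)
        memo[n] = mock_bb(n) if prev < (n - 1) ** 2 else prev + 1
    return memo[n]

values = [h(n) for n in range(31)]
below_square = [n for n, v in enumerate(values) if v < n * n]
at_bb = [n for n, v in enumerate(values) if v == mock_bb(n)]
print(below_square)  # witnesses against (A): [8, 23]
print(at_bb)         # witnesses for (B):     [9, 24]
```

With `mock_bb` the jumps quickly space themselves out (after reaching $2^{24}$, the next one waits until $n^2$ catches up again), mirroring how $h$ dips below $n^2$ infinitely often yet agrees with $BB$ infinitely often.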