How does one make a large number using computable methods?


Context: I'm starting another contest and I was interested in how one makes an extremely large finite number as simply as possible.

For starters, the Ackermann function is extremely simple:

$$\begin{align} A(0,n) &= n+1 \\ A(m+1,0) &= A(m,1) \\ A(m+1,n+1) &= A(m,\,A(m+1,n)) \end{align}$$

And this manages to reach the heights of Knuth's up-arrow notation and the hyperoperations. There isn't anything very complicated about this function, and I'd say the reason it grows so fast is that it layers recursion on top of recursion, starting from nothing more than the successor operation.
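For instance, here is a direct transcription of those three equations into Python (a sketch; only tiny arguments are feasible, since even $A(4,2)$ already has nearly $20{,}000$ digits):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion gets deep quickly

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Two-argument Ackermann function, exactly as defined above."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Even tiny inputs already show the rapid growth:
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Each increment of the first argument jumps to the next hyperoperation, which is why $A(4,\cdot)$ is already out of reach of direct evaluation.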

Indeed, this points out the strength of recursion to produce large numbers.

Likewise, there exist other ways to produce large numbers. One such example is the lifetime of a worm.

What principles make the lifetime of a worm so large?

What other computable ways can you make a large number?


Best answer

What principles make the lifetime of a worm so large?

It's the same principle for worms, hydras, Goodstein sequences, and even the fast growing hierarchy: a fixed "transformation rule" is being repeatedly applied to some particular kind of expression, stopping only when the expression reaches some special form (e.g. an empty string, an empty tree, the value $0$, etc.), the output being the number of repetitions required to accomplish this (or perhaps this number plus a starting number).

In all these cases, each expression $E$ can be seen as encoding a unique ordinal, $o(E)$, with the following properties:

  1. Each iteration decreases $o(E)$.

  2. If $o(E)$ is a successor ordinal, then the next iteration decreases $o(E)$ only to its predecessor.

  3. If $o(E)$ is a limit ordinal, then the next iteration decreases $o(E)$ to an ordinal whose Cantor normal form (CNF) nevertheless has an increased number of terms each of form $\omega^\beta$ (in fact, an arbitrarily large finite number of such terms will not prevent termination).

NB: (1) guarantees that the game will eventually end (when the encoded ordinal reaches $0$), because there can be no infinite decreasing sequence of ordinals, and (3) allows increasingly enormous jumps in the number of terms in the CNF of $o(E)$, which (2) only reduces very slowly (by removing one term of form $\omega^0$ each time) -- thus tending to require enormously many iterations before termination.

Example

Aside from worms, the simplest possible hydra game might be the following (see the link for the ordinal encoding scheme): Let $E$ be any ordinary balanced parenthesis expression -- e.g., (()())(()) -- and let $n$ be a positive integer. Let the transformation rule be $$<n,E>\ \to\ <n+1,r_{n+1}\{E\}>,$$ where $r_n\{E\}$ is defined by

$$r_n\{A(B)\} = \begin{cases} A, & \text{if }B\text{ is empty} \\ A\ (r_n\{B\})^n, & \text{otherwise.} \end{cases} $$ Now any $<n,E>$ eventually reduces to some $<n^*,\text{ empty}>$. In particular, it can be shown that the function computed by starting with $<n,(((...)))>$ (with $n$ nested bracket pairs) strictly dominates Goodstein's function.

Other examples: starting with $<n,((()))>$ produces $n^*$ greater than the diagonalized Ackermann function $A(n,n)$, and starting with $<1,((()()))>$ produces $n^*$ greater than Graham's number.

Here's a tiny illustration for $<n,E>=<1,(()())>$:

 n           E          o(E)
---   --------------  --------
 1    (()())            w^2
 2    (())(())          w2
 3    (())()()()        w+3
 4    (())()()          w+2
 5    (())()            w+1
 6    (())              w
 7    ()()()()()()()    7
 8    ()()()()()()      6
 9    ()()()()()        5
 10   ()()()()          4
 11   ()()()            3
 12   ()()              2
 13   ()                1
 14   empty             0
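This game is easy to mechanize. Here is a minimal Python sketch (the names `split_last_group`, `r`, and `play` are mine, not part of the linked encoding scheme) that implements $r_n$ on parenthesis strings and reproduces the trace in the table:

```python
def split_last_group(e: str):
    """Split a balanced parenthesis string e = A(B) at its last
    top-level pair, returning (A, B)."""
    depth = 0
    for i in range(len(e) - 1, -1, -1):
        if e[i] == ')':
            depth += 1
        else:
            depth -= 1
            if depth == 0:
                return e[:i], e[i + 1:-1]
    raise ValueError("unbalanced expression")

def r(n: int, e: str) -> str:
    """r_n{A(B)}: A if B is empty, else A followed by (r_n{B}) written n times."""
    a, b = split_last_group(e)
    if b == "":
        return a
    return a + ("(" + r(n, b) + ")") * n

def play(n: int, e: str) -> int:
    """Iterate <n, E> -> <n+1, r_{n+1}{E}> until E is empty; return the final n."""
    while e:
        n += 1
        e = r(n, e)
    return n

print(play(1, "(()())"))  # 14, matching the final row of the table
```

Only minuscule starting expressions are runnable, of course; as noted above, $\langle 1,((\,()\,()\,))\rangle$ already exceeds Graham's number.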

NB: It's worth repeating that when $E$ encodes a limit ordinal, the increase in the number of terms in the CNF of $o(E)$ can be made arbitrarily large and finite; e.g., when $B$ is not empty, we could have defined $$r_n\{A(B)\} = A\ (r_n\{B\})^{g(n)}$$ where $g$ is any positive integer-valued function -- no matter how rapidly increasing -- and the process would still eventually terminate.

The ordinal encoding scheme for worms is described in Beklemishev's paper, The Worm Principle.

A couple of links at the googology wiki:

Kirby-Paris hydra

Beklemishev's worms

Another answer

You can make some very massive numbers by looking at meta versions of the Goodstein sequences.

The Goodstein sequences can be defined by writing a number $N$ in base-$b$ notation as

$$a_0 + a_1 b + a_2 b^2 + \cdots + a_k b^k$$

and then repeatedly applying the operation: increment the base $b$ and subtract $1$ (which decrements $a_0$, borrowing from the higher digits when $a_0$ is zero).
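A minimal Python sketch of this procedure, in its weak (non-hereditary) form where only the base of the digits is bumped (the name `weak_goodstein_length` is mine). Even tiny seeds take surprisingly many steps to reach zero:

```python
def weak_goodstein_length(n: int, base: int = 2) -> int:
    """Steps until the weak Goodstein sequence starting at n reaches 0:
    write n in the current base, increase the base by one, subtract 1."""
    steps = 0
    while n > 0:
        digits = []          # base-`base` digits of n, least significant first
        m = n
        while m:
            digits.append(m % base)
            m //= base
        base += 1            # reinterpret the same digits in the next base
        n = sum(d * base**i for i, d in enumerate(digits)) - 1
        steps += 1
    return steps

print([weak_goodstein_length(n) for n in range(1, 5)])  # [1, 3, 5, 21]
```

The full Goodstein sequences write the exponents hereditarily in base $b$ as well, which makes the lengths vastly larger still; this weak form is only meant to make the bump-and-subtract mechanism concrete.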

Now this can of course be taken a step further. Say we represent "addition" as $H[1, a, b]$, where $H[n, a, b] = H[n-1, a, H[n-1, a, \dots]]$ (with $b$ copies of $a$) denotes the $n^{\text{th}}$ hyperoperation.

Then, instead of writing a number as

$$a_0 + a_1 b + a_2 b^2 + \cdots + a_k b^k,$$

you can write it as

$$H[1, a_0, H[1, a_1 b, H[1, a_2 b^2, \dots]]],$$

and at each stage increment the first index (turning addition into multiplication, then exponentiation, and so on), increment the base $b$, and subtract $1$.

The lengths of these sequences end up being some ungodly massive numbers that make the Ackermann scale seem tiny. To get even wilder, we can invent a recursive Goodstein sequence

where, instead of applying the Goodstein step

$$a_0 + a_1 b + a_2 b^2 + \cdots + a_k b^k \;\rightarrow\; (a_0 - 1) + a_1 (b+1) + a_2 (b+1)^2 + \cdots + a_k (b+1)^k,$$

We actually do the following

$$a_0 + a_1 b + a_2 b^2 + \cdots + a_k b^k \;\rightarrow\; (a_0 - 1) + a_1 X(b) + a_2 X(b)^2 + \cdots + a_k X(b)^k,$$

where $X(b)$ is the fastest-growing function you can think of. This produces a Goodstein-style generalization whose sequence lengths grow significantly faster than $X$ itself.
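As a sketch of this generalization (again restricted to the weak, non-hereditary form so that tiny seeds remain runnable; `goodstein_like_length` and `bump` are my own names, with `bump` standing in for $X$): the process still terminates no matter how aggressive the base-growth function is, though the step counts explode.

```python
def goodstein_like_length(n: int, bump, base: int = 2) -> int:
    """Steps until 0 for a weak-Goodstein-style sequence in which the base
    grows by an arbitrary function X at each step: write n in the current
    base, replace the base by bump(base), subtract 1."""
    steps = 0
    while n > 0:
        digits = []
        m = n
        while m:
            digits.append(m % base)
            m //= base
        base = bump(base)    # the generalization: base -> X(base), not base+1
        n = sum(d * base**i for i, d in enumerate(digits)) - 1
        steps += 1
    return steps

# bump(b) = b + 1 recovers the weak Goodstein sequence; even an aggressive
# bump such as squaring the base still terminates:
print(goodstein_like_length(3, lambda b: b + 1))   # 5
print(goodstein_like_length(3, lambda b: b * b))   # 17
```

Termination holds for the same reason as in the other answer: each step strictly decreases the ordinal $a_0 + a_1\omega + \cdots + a_k\omega^k$ attached to the digit string, regardless of how fast the base grows.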

You can mix these two ideas to keep creating ever more general and faster-growing sequences (with respect to any scale you look at).