The linked page shows how Jonathan Bowers defines numbers using $2$D arrays.
I would like to get a feeling for how big such numbers are.
How can the number
$$\left\langle \matrix {3&3\\3&3}\right\rangle $$
be described using numbers defined by linear arrays?
How much bigger is
$$\left\langle \matrix {4&4\\4&4}\right\rangle $$
than the above number?
You can't really visualize planar arrays in terms of linear arrays, other than by understanding the rules of reduction, just as a multivariable Ackermann number like $A(9,9,9,9)$ can't easily be expressed as a two-argument Ackermann number: the numbers just get too large. It's true that a nonlinear planar array with first entry $3$ will eventually reduce to a linear array of the form $<3,3,\ldots,3>$, but the number of entries will usually be so large as to be indistinguishable from the number defined by the planar array itself.
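Since everything here bottoms out in the reduction rules for linear arrays, it may help to see those rules as code. This is a minimal sketch based on my own reading of Bowers' rules (base cases $<b> = b$ and $<b,p> = b^p$, trailing $1$s dropped, second entry $1$ collapses the array to its base, and otherwise the first non-$1$ entry past the second gets decremented); it is not an official implementation, and only the very smallest arrays finish before the values explode.

```python
# A sketch of Bowers' linear-array rules as I read them; not an official
# implementation. An array <b, p, ...> is represented as the list [b, p, ...].

def linear(a):
    """Recursively evaluate the linear Bowers array <a[0], a[1], ...>."""
    a = list(a)
    while len(a) > 1 and a[-1] == 1:   # trailing 1s can be dropped
        a.pop()
    if len(a) == 1:                    # <b> = b
        return a[0]
    if len(a) == 2:                    # <b, p> = b^p
        return a[0] ** a[1]
    if a[1] == 1:                      # <b, 1, ...> = b
        return a[0]
    b = a[0]
    k = 2
    while a[k] == 1:                   # first entry past the second that exceeds 1
        k += 1
    # <b, p, 1,...,1, c, #> = <b, b,...,b, <b, p-1, 1,...,1, c, #>, c-1, #>
    return linear([b] * (k - 1)
                  + [linear([b, a[1] - 1] + a[2:])]
                  + [a[k] - 1] + a[k + 1:])

print(linear([3, 3]))       # <3,3> = 3^3 = 27
print(linear([2, 2, 2]))    # <2,2,2> = 2^^2 = 4
print(linear([2, 3, 2]))    # <2,3,2> = 2^^3 = 16
print(linear([3, 3, 2]))    # <3,3,2> = 3^^3 = 3^27 = 7625597484987
```

Anything much beyond $<3,3,2>$ will not terminate in any reasonable time, which is rather the point.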
We have
$ \left< \begin{array}{cc} 3 & p \\ 2 \\ \end{array} \right> = <3,3,\ldots,3>$ with $p$ entries
and then
$ \left< \begin{array}{ccc} 3 & p+1 & 2 \\ 2 \\ \end{array} \right> = \left< \begin{array}{cc} 3 & \left< \begin{array}{ccc} 3 & p & 2 \\ 2 \\ \end{array} \right> \\ 2 \\ \end{array} \right> $
so basically $ \left< \begin{array}{ccc} 3 & p & 2 \\ 2 \\ \end{array} \right>$ iterates over the function $ \left< \begin{array}{cc} 3 & p \\ 2 \\ \end{array} \right>$. Similarly, $ \left< \begin{array}{ccc} 3 & p & 3 \\ 2 \\ \end{array} \right>$ iterates over the function $ \left< \begin{array}{ccc} 3 & p & 2 \\ 2 \\ \end{array} \right>$, and so on. Then, $ \left< \begin{array}{cccc} 3 & p & 1 & 2 \\ 2 \\ \end{array} \right>$ iterates over the function $ \left< \begin{array}{ccc} 3 & 3 & p \\ 2 \\ \end{array} \right>$, and so on for more and more entries in the first row. This is the same recursive construction as for linear arrays, except that it starts from the base function $ \left< \begin{array}{cc} 3 & p \\ 2 \\ \end{array} \right> = <3,3,\ldots,3>$, rather than from $<b,p> = b^p$.
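For concreteness, and assuming the second-entry rule carries over to planar arrays (an array whose second entry is $1$ equals its base), the smallest nontrivial case of this recursion unwinds as
$$ \left< \begin{array}{ccc} 3 & 2 & 2 \\ 2 \\ \end{array} \right> = \left< \begin{array}{cc} 3 & \left< \begin{array}{ccc} 3 & 1 & 2 \\ 2 \\ \end{array} \right> \\ 2 \\ \end{array} \right> = \left< \begin{array}{cc} 3 & 3 \\ 2 \\ \end{array} \right> = <3,3,3> $$
which is already $3\uparrow\uparrow\uparrow 3$, the number Bowers calls tritri.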
Next we have
$ \left< \begin{array}{cc} 3 & p \\ 3 \\ \end{array} \right> = \left< \begin{array}{cccc} 3 & 3 & \ldots & 3 \\ 2 \\ \end{array} \right> $ with $p$ entries in the first row,
which leads to a third linear array hierarchy starting from the base function $ \left< \begin{array}{cc} 3 & p \\ 3 \\ \end{array} \right>$. So that should give you a general idea of arrays of the form $\left< \begin{array}{ccc} b & p & \ldots \\ n \\ \end{array} \right> $; they form an infinite sequence of linear array hierarchies, each one diagonalizing over the previous.
Next is
$\left< \begin{array}{cc} 3 & p+1 \\ 1 & 2 \\ \end{array} \right> = \left< \begin{array}{cccc} 3 & 3 & \ldots & 3 \\ \left< \begin{array}{cc} 3 & p \\ 1 & 2 \\ \end{array} \right> \\ \end{array} \right> $
so $\left< \begin{array}{cc} 3 & p \\ 1 & 2 \\ \end{array} \right>$ iterates over the previously described sequence of linear array hierarchies. This leads to another sequence of hierarchies of the form $\left< \begin{array}{ccc} b & p & \ldots \\ n & 2 \\ \end{array} \right>$, which $\left< \begin{array}{cc} 3 & p \\ 1 & 3 \\ \end{array} \right>$ iterates over, and so on. Then $\left< \begin{array}{ccc} 3 & p \\ 1 & 1 & 2 \\ \end{array} \right>$ iterates over $\left< \begin{array}{ccc} 3 & 3 & \ldots \\ 1 & n \\ \end{array} \right>$, and similarly for further entries in the second row.
Then $ \left< \begin{array}{cc} 3 & p \\ 1 \\ 2\\ \end{array} \right> = \left< \begin{array}{cccc} 3 & 3 & \ldots & 3 \\ 3 & 3 & \ldots & 3 \\ \end{array} \right>$
with $p$ entries in each row, so $ \left< \begin{array}{cc} 3 & p \\ 1 \\ 2\\ \end{array} \right>$ diagonalizes over the first two rows.
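In particular, taking $p=2$ in this identity recovers the first number from the question:
$$ \left< \begin{array}{cc} 3 & 2 \\ 1 \\ 2 \\ \end{array} \right> = \left< \begin{array}{cc} 3 & 3 \\ 3 & 3 \\ \end{array} \right> $$
so the $2\times 2$ array of $3$s sits right at the start of the two-row hierarchy.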
I hope this gives a good idea of how the rules work. You can see that there's no way to describe any but the simplest planar arrays in terms of linear arrays without lots and lots of recursion.