Shortcut to $x\uparrow \uparrow n$ for very large $n$ and $x\approx e^{(e^{-1})}$?


If the number $x$ is very close to $e^{(e^{-1})}$, but a bit larger, for example $x=e^{(e^{-1})}+10^{-20}$, then tetrating $x$ many times can still be small. With $x=e^{(e^{-1})}+10^{-20}$, even $x\uparrow \uparrow (10^8)$ (a power tower of $10^8$ copies of $x$, evaluated from the top) is smaller than $e$.

Is there any shortcut to calculate such huge power towers? In other words, can I efficiently calculate $x\uparrow\uparrow n$ to reasonable accuracy (let's say, $6$ digits)? The brute-force method is quite slow, and I am not sure whether it is even numerically stable.
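For reference, the brute-force method is just repeated exponentiation $y \mapsto x^y$ from the top of the tower down. A minimal sketch, using a milder perturbation of $10^{-6}$ because the question's $10^{-20}$ is below double precision (the original base would need arbitrary-precision arithmetic such as mpmath):

```python
# Brute force: evaluate x^^n by iterating y <- x**y.
# Note: the question's perturbation 1e-20 is lost in doubles,
# so this sketch uses 1e-6 instead.
import math

eta = math.exp(1 / math.e)   # e^(1/e) ~ 1.444668
x = eta + 1e-6               # a bit larger than eta

y = x
for _ in range(1000):
    y = x ** y               # after the loop, y = x^^1001

print(y)                     # still below e after 1000 exponentiations
assert y < math.e
```

Even for this much larger perturbation the tower crawls toward $e$ for thousands of steps, which illustrates why iterating $10^8$ times for the original base is so slow.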

There are 2 answers below.

BEST ANSWER

This builds on the related question: *Does $\lim_{n\rightarrow \infty} \left(r+\frac{1}{n^2}\right)\uparrow \uparrow n=e$ hold?*

From that answer, we use Gerald Edgar's tangent approximation: iterating $z \mapsto \exp(z)-1+\epsilon$ is conjugate to iterating $y \mapsto b^y$, where $\epsilon=\ln(\ln(b))+1$ and $y=(z-\epsilon+1)/\ln(b)$. An approximation for $\epsilon$ is $$\epsilon \approx (b-\eta)\frac{e}{\eta}+\mathcal{O}\!\left((b-\eta)^2\right)\;\;\;\text{where}\;\;\;\eta=\exp(1/e)$$
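The first-order formula for $\epsilon$ is easy to sanity-check numerically against the exact definition $\epsilon=\ln(\ln(b))+1$; a quick sketch in mpmath (high precision, so the tiny perturbation of the base survives):

```python
# Numeric check of the first-order approximation for epsilon:
# eps_exact = ln(ln(b)) + 1   vs   eps_approx = (b - eta)*e/eta,
# for a base b slightly above eta = e^(1/e).
from mpmath import mp, mpf, log, exp, e

mp.dps = 40                  # 40 working digits

eta = exp(1 / e)
delta = mpf('1e-10')         # b - eta
b = eta + delta

eps_exact = log(log(b)) + 1
eps_approx = delta * e / eta
print(eps_exact, eps_approx)  # agree to roughly O(delta) relative error
```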

Edgar's equation for iterating $f(z)=\exp(z)-1+\epsilon\;$ gives the approximation $$a=\sqrt{\epsilon/2};\;\;\;f^{\circ n} \approx 2a\tan(an-\frac{\pi}{2})+\epsilon$$

From there, this is the tetration approximation equation I get: $$\text{tet}_b(x)\approx \frac{\left( 2a\tan\left(a(x+3)-\frac{\pi}{2}\right)-\epsilon+1\right)}{\ln(b)}$$ As $\epsilon$ approaches zero, this sexp approximation has $\text{sexp}(-1)\approx 0$, which is convenient.
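A sketch of this approximation in mpmath (arbitrary precision is required here, since the $10^{-20}$ perturbation of the base is lost in ordinary doubles; `tet` is an illustrative helper name, not from the answer):

```python
# Tangent approximation tet_b(x) for a base b slightly above eta = e^(1/e).
from mpmath import mp, mpf, log, sqrt, tan, pi, e, exp

mp.dps = 50                          # 50 working digits

eta = exp(1 / e)
b = eta + mpf('1e-20')               # the OP's base
eps = log(log(b)) + 1                # epsilon = ln(ln(b)) + 1
a = sqrt(eps / 2)

def tet(x):
    """Tangent approximation to tet_b(x)."""
    return (2 * a * tan(a * (x + 3) - pi / 2) - eps + 1) / log(b)

val = tet(mpf(10) ** 8)
print(val)            # a hair below e
print(e - val)        # the tiny gap to e
```

Evaluating `tet(10**8)` returns a value just below $e$, matching the figure quoted below, and `tet(-1)` is indeed very close to $0$.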

So we can apply this equation easily enough to the OP's example: $$b=\eta+10^{-20};\;\;\;\text{tet}_b(10^8)\approx e -5.436393\cdot 10^{-8} $$

I wanted to quantify the error for the OP's tetration example as tightly as possible, and I was able to do so using the slog error term. The inverse of tetration is the slog; the approximate slog can be generated from the $\tan^{-1}$ function by inverting the approximation above. Let's call the slog error term for this equation $$\text{error}_b(x) =\text{slog\_approx}(x)-\text{slog\_exact}(x)$$

Empirically, we observe that the maximum of the slog error term occurs at the inflection point near $e$. Even though the absolute sexp error is at a local minimum near that inflection point, the slog error is at a maximum there. The slog error term measures how many iterations the slog is off by. Here are the maximum error terms for various values of $\epsilon$ (defined above), i.e. for various tetration bases.

epsilon=10^-2  max-err=0.5143557
epsilon=10^-3  max-err=0.8984021
epsilon=10^-4  max-err=1.2821615
epsilon=10^-5  max-err=1.6659268
epsilon=10^-6  max-err=2.0496911
epsilon=10^-7  max-err=2.4334552
epsilon=10^-8  max-err=2.8172194
epsilon=10^-9  max-err=3.2009836
epsilon=10^-10 max-err=3.5847478
epsilon=10^-11 max-err=3.9685120
epsilon=10^-12 max-err=4.3522762
max-err ≈ -0.252988158 - 0.383773297*log_10(epsilon)
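The linear fit can be checked against the tabulated values and then extrapolated to the OP's base, where $\epsilon \approx 10^{-20}\cdot e/\eta$; a sketch:

```python
# Check the empirical fit  max-err ~ a0 + a1*log10(eps)  against the table,
# then extrapolate to the OP's base b = eta + 1e-20.
import math

a0, a1 = -0.252988158, -0.383773297
table = {
    -2: 0.5143557,  -3: 0.8984021,  -4: 1.2821615,  -5: 1.6659268,
    -6: 2.0496911,  -7: 2.4334552,  -8: 2.8172194,  -9: 3.2009836,
    -10: 3.5847478, -11: 3.9685120, -12: 4.3522762,
}
for log10_eps, err in table.items():
    assert abs((a0 + a1 * log10_eps) - err) < 1e-3

# first-order epsilon for b = eta + 1e-20 (see the approximation above)
eps = 1e-20 * math.e / math.exp(1 / math.e)
pred = a0 + a1 * math.log10(eps)
print(round(pred, 3))   # ~ 7.317 iterations of maximum slog error
```

The extrapolated maximum slog error of about $7.3$ iterations is what justifies the eight-iteration bracketing used below.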

Here is an error-term graph for $\epsilon=10^{-10};\;\;\;b=\eta+2.258441\cdot 10^{-10}$. Notice that the slog error is zero at $\text{slog}(0)$, grows slowly towards the maximum near $e$, and then becomes small again near $\text{slog}(2e)$, where the error term is $\approx 0.09$. The graph region covers a total of $444283$ iterations of $x \mapsto b^x$. The "detail blowup" of the slog error covers iterations $1147..443129$, which still includes $\approx 99.5\%$ of the integer iterates.

*(graph: error term for slog(x))*

For $b=\eta+10^{-20}$ we predict a maximum slog error of $7.317$ iterations. So if we iterate eight times, we get an upper bound for the OP's tetration: $$z1 = e - 5.43639300244546\cdot 10^{-8};\;\;\;z2=b^{b^{\cdot^{\cdot^{b^{z1}}}}}\;\;\;\text{(eight iterations of }z\mapsto b^z\text{)}$$ $$z2 = e - 5.43639256750679\cdot 10^{-8}$$
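The bracketing step above is a one-liner in mpmath; a sketch, starting from the quoted value of $z1$ and applying $z \mapsto b^z$ eight times:

```python
# Start from the approximation z1 and apply z -> b^z eight times;
# the true value of (eta + 1e-20)^^(10^8) lies between z1 and z2.
from mpmath import mp, mpf, e, exp

mp.dps = 40                  # 40 working digits

b = exp(1 / e) + mpf('1e-20')
z1 = e - mpf('5.43639300244546e-8')

z2 = z1
for _ in range(8):
    z2 = b ** z2

print(z2 - z1)               # ~ 4.35e-15: the bracket is extremely tight
```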

The exactly correct answer is somewhere between the two values; both $z1$ and $z2$ are accurate to more than $6$ decimal digits, as requested by the OP, since the two values differ only by about $z2-z1 \approx 4.35 \cdot 10^{-15}$: $$z1 < \;\;(\eta+10^{-20})\uparrow\uparrow 10^8\;\;<z2$$

SECOND ANSWER

This is not (yet) an answer, but maybe it helps. I turned the question around: given $x_0=e$, how many iterations $x_{n+1}=\log(x_n)/\log(b)$ does it take to fall below $b$?

For an empirical set of data I used $b_j=e^{1/e} + e^{-j}$ (update $3$):

   j     n   N=n+ 2.61      log(N)         N^ = estimated by regression
   ------------------------------------------------------
   4     10  12.6148450000  2.53487429506  11.9592027589 
   5     17  19.6148450000  2.97628667751  19.7173919949 
   6     30  32.6148450000  3.48476755282  32.5084836270 
   7     51  53.6148450000  3.98182598869  53.5974285039 
   8     85  87.6148450000  4.47295044706  88.3672205441 
   9    143  145.614845000  4.98096508798  145.692916333 
  10    237  239.614845000  5.47903282175  240.207010462 
  11    393  395.614845000  5.98044112432  396.034408035 
  12    650  652.614845000  6.48098713137  652.950353305 
  13   1073  1075.61484500  6.98064772590  1076.53313761 
  14   1772  1774.61484500  7.48133868963  1774.90308489 
  15   2924  2926.61484500  7.98160169108  2926.32047330 
  16   4823  4825.61484500  8.48169343467  4824.68681550 
  17   7955  7957.61484500  8.98188459135  7954.56378753 
  18  13118  13120.6148450  9.48193992459  13114.8585327 
  19  21631  21633.6148450  9.98200362681  21622.7462532 
  20  35666  35668.6148450  10.4820264451  35649.8817250 
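The table can be reproduced with a short loop. This sketch assumes the counting convention "iterate $x \mapsto \log(x)/\log(b_j)$ starting from $x_0=e$ while $x \ge b_j$", which matches the tabulated $n$ at least for small $j$; double precision suffices for the range of $j$ shown:

```python
# Count iterations x -> log(x)/log(b_j), starting from x_0 = e,
# needed to drop below b_j = e^(1/e) + e^(-j).
import math

def iterations_below_b(j):
    b = math.exp(1 / math.e) + math.exp(-j)
    logb = math.log(b)
    x, n = math.e, 0
    while x >= b:
        x = math.log(x) / logb
        n += 1
    return n

for j in (4, 5, 6, 7):
    print(j, iterations_below_b(j))   # compare with the n column above
```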

The regression formula (using weighted regression, with more weight on the large values) I got, with $a_0=1.61850208424$ and $a_1=0.500000001301$, was $$\hat N = a_0 \cdot \exp (a_1 \cdot j)$$

The values of $a_0$ and $a_1$ are now very near the values coming from Sheldon's answer in the other, linked thread, so this answer and its regression table might be seen as an illustration of the formula given there.

It might be interesting that the additional constant term $+2.614\ldots$ means we iterate back not only to $b$ but to $b\uparrow \uparrow (-2.614\ldots)$, which is a negative real number; the fractional part is some expression for the behaviour in the neighbourhood between $0$ and $-\infty$.

If we want the base $b$ with $e^{-j}=10^{-20}$, this gives $j = 20\ln(10) \approx 46.05170$, and using the regression we get $\hat n = \hat N-2.6148 \approx 16{,}185{,}021{,}809 \approx 10^{10.20911}$
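Evaluating the regression at the $j$ that makes $e^{-j}=10^{-20}$, i.e. $j = 20\ln 10$, reproduces the quoted figure; a sketch:

```python
# Evaluate n-hat = a0*exp(a1*j) - 2.6148 at j = 20*ln(10),
# i.e. for the base b = e^(1/e) + 1e-20.
import math

a0 = 1.61850208424
a1 = 0.500000001301
j = 20 * math.log(10)

nhat = a0 * math.exp(a1 * j) - 2.6148
print(f"{nhat:.0f}")          # ~ 1.6185e10 iterations
print(math.log10(nhat))       # ~ 10.20911
```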

Of course, this gives no idea how well the regression fits if $j$ and $n$ increase substantially, but I think at least up to $j=32$ this table should be extensible with Pari/GP and an appropriate number of internal digits for computational precision.
However, the exponent in the last expression for $n$ being $10.20911$ with base $b=e^{1/e} + 10^{-20}$ suggests that this also reproduces Sheldon's formula in the comments well.