Solving for best fit value $C$ in $\sqrt {\mathrm{Exp}_a^{[1/2]} (x) \cdot \mathrm{Exp}_b^{[1/2]} (x )} \sim\sim \mathrm{Exp}_C^{[1/2]} (x).$


Let $\mathrm{Exp}_t^{[y]}(x)$ denote the $y$-th iterate of the exponential function with base $t$, i.e. of $x \mapsto t^x$.

For example $\mathrm{Exp}_t^{[1]} (x) = t^x$.

Let $\sim\sim$ denote best fit.

Now, as $x$ goes to positive infinity and a pair $(a,b)$ with $e<a<b$ is given, I wonder how to find the best-fit base value $C$ such that

$$\sqrt { \mathrm{Exp}_a^{[1/2]} (x) \cdot \mathrm{Exp}_b^{[1/2]} (x) } \sim\sim \mathrm{Exp}_C^{[1/2]} (x). $$

Let us then define $C = f(a,b)$; I assume $a < f(a,b) < \sqrt{ab} < b$.

How can these bounds be improved?

How can the value $C$ be found?

Edit:

There are many solutions to tetration, but here I am talking about solutions where $x>1$ and $b > a > e$ imply $\mathrm{Exp}_b^{[1/2]}(x) > \mathrm{Exp}_a^{[1/2]}(x)$.

Notice that in that case $\mathrm{Exp}_t^{[1/2]}(x)$ is asymptotic to $2\sinh_t^{[1/2]}(x)$, where

$$ 2\sinh_t (x) = t^x - t^{-x} $$

And $^{[1/2]}$ means half-iterate as usual.

Notice that $2\sinh_t$ has a hyperbolic (repelling) fixed point at $x=0$, since its derivative there is $2\ln t > 1$ for $t > e$. So by using the Koenigs function we get a solution from that fixed point.
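This Koenigs-based half-iterate can be sketched numerically. The idea: pull $x$ back to the fixed point with the inverse map, take a half-step $\sqrt{\lambda}$ in the linearizing coordinate (where $\lambda = 2\ln t$ is the multiplier), then push forward again. This is my own illustrative code, not a definitive implementation; all function names are mine.

```python
import math

def f_2sinh(t, x):
    """2sinh_t(x) = t^x - t^{-x}, computed as 2*sinh(x*ln t) to avoid cancellation."""
    return 2.0 * math.sinh(x * math.log(t))

def f_2sinh_inv(t, y):
    """Inverse of 2sinh_t: solve t^x - t^{-x} = y, i.e. x = asinh(y/2)/ln t."""
    return math.asinh(y / 2.0) / math.log(t)

def half_iterate_2sinh(t, x, n=40):
    """Approximate 2sinh_t^{[1/2]}(x) via the Koenigs linearization at the
    repelling fixpoint 0 (multiplier lam = 2*ln t):
    f^{[1/2]}(x) ~= f^{[n]}(sqrt(lam) * f^{[-n]}(x)) for large n."""
    lam = 2.0 * math.log(t)
    u = x
    for _ in range(n):          # pull back toward the fixpoint
        u = f_2sinh_inv(t, u)
    u *= math.sqrt(lam)         # half-step in the Koenigs coordinate
    for _ in range(n):          # push forward again
        u = f_2sinh(t, u)
    return u

# sanity check: applying the half-iterate twice should reproduce 2sinh_t
t, x = 3.0, 1.0
h1 = half_iterate_2sinh(t, x)
h2 = half_iterate_2sinh(t, h1)
```

Composing the half-iterate with itself reproduces $2\sinh_t$ to high accuracy, which is its defining property.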

Also notice that this implies the entire post can be reformulated by replacing every $\mathrm{Exp}_t$ with $2\sinh_t$.


So we get the possibly easier reformulation:

Let $2\sinh_t^{[y]}(x)$ denote the $y$-th iterate of the function $2\sinh_t$ with base $t$: $2\sinh_t(x) = t^x - t^{-x}.$

For example $2\sinh_t^{[1]}(x) = t^x - t^{-x}$.

Now, as $x$ goes to positive infinity and a pair $(a,b)$ with $e<a<b$ is given, I wonder how to find the best-fit base value $C$ such that

$$\sqrt { 2\sinh_a^{[1/2]} (x) \cdot 2\sinh_b^{[1/2]} (x) } \sim\sim 2\sinh_C^{[1/2]} (x)~. $$
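To make "best fit" concrete at a single moderately large $x$, one can compute the half-iterates numerically via the Koenigs construction at the fixed point $0$ and bisect for the base $C$ whose half-iterate matches the geometric mean. This is my own numerical sketch (all names are mine); it assumes the half-iterate is increasing in the base at the chosen point, and the resulting $C$ depends on $x$, whereas the question asks about $x \to \infty$.

```python
import math

def half_2sinh(t, x, n=40):
    """Approximate 2sinh_t^{[1/2]}(x) by the Koenigs linearization at the
    repelling fixpoint 0 (multiplier lam = 2*ln t): pull x back to the
    fixpoint, scale by sqrt(lam), push forward again."""
    lam = 2.0 * math.log(t)
    u = x
    for _ in range(n):
        u = math.asinh(u / 2.0) / math.log(t)   # inverse of t^u - t^{-u}
    u *= math.sqrt(lam)
    for _ in range(n):
        u = 2.0 * math.sinh(u * math.log(t))    # t^u - t^{-u}, cancellation-free
    return u

def fit_base(a, b, x, n=40):
    """Bisect for C with half_2sinh(C, x) = sqrt(half_2sinh(a,x)*half_2sinh(b,x))."""
    target = math.sqrt(half_2sinh(a, x, n) * half_2sinh(b, x, n))
    lo, hi = a, b
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if half_2sinh(mid, x, n) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b, x = 3.0, 5.0, 2.0
C = fit_base(a, b, x)
# the question conjectures a < C < sqrt(a*b); compare C against math.sqrt(a*b)
```
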

This edit should help solve the problem, clarify the goal of the question, avoid confusion, and address Sheldon's comments.

See the Koenigs function, tetration, and the strongly related topics.

End of Edit


There are 2 answers below.


Consider the function $$g(z) = \text{slog}_e(\text{sexp}_2(z+0.5))-\text{slog}_e(\text{sexp}_2(z))-0.5$$

If $g(z)=0$ then $\exp_e^{0.5}(\text{sexp}_2(z))=\exp_2^{0.5}(\text{sexp}_2(z))$: we always have $\exp_2^{0.5}(\text{sexp}_2(z))=\text{sexp}_2(z+0.5)$, and $g(z)=0$ says precisely that $\text{sexp}_2(z+0.5)=\exp_e^{0.5}(\text{sexp}_2(z))$. So $g$ compares the base-$e$ half-iterate with the base-$2$ half-iterate at the point $\text{sexp}_2(z)$.

$g(z)$ is written for base $2$ and base $e$, but any pair of bases works. The OP says "usually $\exp_e^{0.5}>\exp_w^{0.5}$", which would imply $g(z)<0$; but as $z$ gets arbitrarily large, $g(z)$ spends half of its time positive and half of its time negative. If $z$ is large enough, one can easily show that $g(z+1) \approx g(z)$ and $g(z+0.5) \approx -g(z)$, where the approximations get arbitrarily good as $z$ increases.

First we show that, if $z$ is large enough, $\text{slog}_e(\text{sexp}_2(z+1)) \approx \text{slog}_e(\text{sexp}_2(z))+1$.

Step 1:
$$\text{slog}_e(\text{sexp}_2(z)) = \text{slog}_e\!\left(2^{\text{sexp}_2(z-1)}\right)$$
$$\text{slog}_e(\text{sexp}_2(z)) = \text{slog}_e\!\left(\ln\!\left(2^{\text{sexp}_2(z-1)}\right)\right)+1$$
$$\text{slog}_e(\text{sexp}_2(z)) = \text{slog}_e\!\left(\ln(2) \cdot \text{sexp}_2(z-1)\right)+1$$

Similarly we can write an equation for $\text{slog}_e(\text{sexp}_2(z+1))$ in terms of $\text{sexp}_2(z-1)$:
$$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e\!\left(\ln(2) \cdot \text{sexp}_2(z)\right)+1$$
$$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e\!\left(\ln(2) \cdot 2^{\text{sexp}_2(z-1)}\right)+1$$
$$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e\!\left(\ln(2) \cdot \text{sexp}_2(z-1)+\ln(\ln(2))\right)+2$$

If $z$ is large enough, then $\text{sexp}_2(z-1)$ is large enough to make the $\ln(\ln(2))$ term completely insignificant:
$$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e\!\left(\ln(2) \cdot \text{sexp}_2(z-1)\right)+2+O\!\left(\tfrac{1}{\text{sexp}_2(z-1)}\right) = \text{slog}_e(\text{sexp}_2(z))+1+O\!\left(\tfrac{1}{\text{sexp}_2(z-1)}\right)$$

With a little algebra, $g(z+1) = g(z)+O\!\left(\tfrac{1}{\text{sexp}_2(z-1)}\right)$, so $g(z+1)$ approaches $g(z)$ as $z$ increases. With a little more algebra, we can also show that $g(z+0.5)=-g(z) + O\!\left(\tfrac{1}{\text{sexp}_2(z-1)}\right)$; therefore if $g(z)=0$, then $g(z+0.5)$ also approaches zero as $z$ increases. Therefore, unless $g(z)$ goes to zero for all $z$ as $z$ gets arbitrarily large, $g(z)$ will spend half of its time positive and half of its time negative.
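The recurrences used above, $\text{slog}_e(x)=\text{slog}_e(\ln x)+1$ and $\text{sexp}_2(z)=2^{\text{sexp}_2(z-1)}$, can be sketched with the crude piecewise-linear approximation of tetration (linear on the critical strip). This is my own illustration, only $C^0$ rather than the analytic solution used in this answer, so its zero crossings will not match the quoted values.

```python
import math

def sexp2(z):
    """Tetration base 2, piecewise-linear approximation: sexp2(z) = z + 1 on
    [-1, 0], extended right by sexp2(z) = 2**sexp2(z-1) and left by log2."""
    if z > 0:
        return 2.0 ** sexp2(z - 1)
    if z >= -1:
        return z + 1.0
    return math.log(sexp2(z + 1), 2)

def sloge(x):
    """Super-logarithm base e, linear approximation: sloge(x) = x - 1 on
    (0, 1], extended by sloge(x) = sloge(ln x) + 1 for x > 1."""
    if x > 1.0:
        return sloge(math.log(x)) + 1.0
    return x - 1.0

def g(z):
    """The comparison function of this answer, with the crude sexp/slog
    standing in for the analytic ones."""
    return sloge(sexp2(z + 0.5)) - sloge(sexp2(z)) - 0.5
```

Both recurrences hold exactly by construction; only the base-strip interpolation is approximate.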

Here are two graphs of $g(z)$: one from $-1$ to $8$, and another showing the asymptotic behavior. The first zero crossing occurs at $z_1\approx 4.61986470857217$, with $\text{sexp}_2(z_1)\approx 4.78924742892085\cdot 10^{72}$, followed by $z_2\approx 4.91660$. Subsequent zero crossings occur at $z\approx 5.41812556847432+0.5n$ for integers $n$.

g(z) from -1 to 8

g(z) from 4.5 to 8


Mick, the OP, commented: "Your prediction must be false, Sheldon. Notice that if it is ordered in the interval $[0,t]$ for any $t>0$, then by induction it is ordered in $[t,\infty]$. ..."

Mick was seeking to change from the half-iterates of $\exp_a$ and $\exp_b$, which are not ordered as $x$ gets arbitrarily large, to the half-iterates of $\text{2sinh}_a$ and $\text{2sinh}_b$, which Mick thought would be ordered. That doesn't match my results. Define $S_e$ as the superfunction of 2sinh for base $e$, $\text{2sinh}_e(z)=e^z-e^{-z}$, and define $S_2$ as the superfunction of 2sinh for base 2, $\text{2sinh}_2(z)=2^z-2^{-z}$. These half-iterates are generated from the fixed point at zero by Koenigs' method, using the Schröder equation to generate the two analytic superfunctions below:

$$S_e(z) = \text{2sinh}_e^{[z]}\;\;\;S_2(z) = \text{2sinh}_2^{[z]}$$ $$S_e(z+1) = \text{2sinh}_e(S_e(z));\;\;\;S_2(z+1) = \text{2sinh}_2(S_2(z))$$
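These superfunctions can be approximated numerically from the Schröder/Koenigs linearization at $0$: since the linearizing coordinate is close to the identity near the fixed point, $S_t(z)\approx \text{2sinh}_t^{[n]}(\lambda^{z-n})$ for large $n$, with multiplier $\lambda = 2\ln t$. This is my own sketch, with my own function names, not the computation used for the graphs below.

```python
import math

def f_2sinh(t, x):
    """2sinh_t(x) = t^x - t^{-x} = 2*sinh(x*ln t), with an overflow guard."""
    w = x * math.log(t)
    if w > 700.0:               # sinh would overflow double precision
        return math.inf
    return 2.0 * math.sinh(w)

def superfunction(t, z, n=60):
    """Approximate the superfunction S_t with S_t(z+1) = 2sinh_t(S_t(z)),
    built from the Koenigs linearization at the repelling fixpoint 0
    (multiplier lam = 2*ln t): S_t(z) ~= 2sinh_t^{[n]}(lam**(z-n))."""
    lam = 2.0 * math.log(t)
    u = lam ** (z - n)          # start very close to the fixpoint
    for _ in range(n):
        u = f_2sinh(t, u)
    return u

# the defining functional equation, checked numerically for both bases
lhs_e = superfunction(math.e, 2.0)
rhs_e = f_2sinh(math.e, superfunction(math.e, 1.0))
lhs_2 = superfunction(2.0, 2.0)
rhs_2 = f_2sinh(2.0, superfunction(2.0, 1.0))
```
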

Exactly analogously to before, consider the function $$g(x) = S^{-1}_2(S_e(x+0.5))-S^{-1}_2(S_e(x))-0.5$$

If $g(x)=0$ then $\text{2sinh}_e^{0.5}(S_e(x))=\text{2sinh}_2^{0.5}(S_e(x))$, since $\text{2sinh}_e^{0.5}(S_e(x))=S_e(x+0.5)$; so we are comparing the half-iterate base $2$ with the half-iterate base $e$.

$g(x)$ is written for base $2$ and base $e$, but any pair of bases works. The OP hopes that $\forall x\; \text{2sinh}_e^{0.5}(x)>\text{2sinh}_2^{0.5}(x)$, which would imply $\forall x\; g(x)>0$; but computationally, as $x \to \infty$, $g(x)$ spends half of its time positive and half of its time negative. If $x$ is large enough, one can easily show that $g(x+1) \approx g(x)$ and $g(x+0.5) \approx -g(x)$, where the approximations get arbitrarily good as $x$ increases.

First we show that, if $x$ is large enough, $S^{-1}_2(S_e(x+1)) \approx S^{-1}_2(S_e(x))+1$; then we show that if $x$ is large enough, $g(x+0.5)\approx-g(x)$. Therefore, unless $g(x)$ goes to $0+\epsilon\;\forall x$ as $x\to \infty$, $g(x)$ will spend half of its time positive and half of its time negative.

One can write a $\text{basechange}S_2(x)$-like equation as the limit of $\text{2sinh}_2^{[-n]}(S_e(x+n))$, which I conjecture is the only solution (up to a constant) for which $g(x)$ goes to $0+\epsilon\;\forall x$ as $x\to\infty$. Basechange-type equations converge beautifully at the real axis, but they don't converge in any radius in the complex plane; so the basechange is conjectured to be $C_\infty$ but nowhere analytic. That is why I expected that $\text{basechange}S_2(x) \ne S_2(x)$, since we know $S_2(z)$ is analytic. And therefore, I would not expect $g(x)$ to go to $0+\epsilon\;\forall x$ as $x \to \infty$. Computations agree.

The first "zero" crossing corresponds to $x=8.92760980698518338019\cdot 10^{59}$, for which $\text{2sinh}^{0.5}_e(x)=\text{2sinh}^{0.5}_2(x)$. And once again, from the graph below: $$\lim_{x \to \infty} g(x) \ne 0 \;\forall x$$

This is a graph of $g(x)$ with $x$ ranging from $3$ to $6$, showing the 50% duty cycle as $x$ gets arbitrarily large.

graph of g(x) from 3..6

For the remaining steps, we assume without being rigorous that, if $x$ is large enough, then $\text{2sinh}_e(x) \approx e^x$ and likewise $\text{2sinh}_2(x)\approx 2^x$, and then $\epsilon$ is insignificantly small in the equations below, provided $S_e(x-1)$ is large enough. Then, following the same steps as in the earlier answer, one can conclude: $$S^{-1}_2(S_e(x+1)) = S^{-1}_2\left(\frac{S_e(x-1)}{\ln(2)} -\ln(\ln(2)) +\epsilon\right)+2$$

If $x$ is large enough, then $S_e(x-1)$ is large enough to make the $\ln(\ln(2))$ term completely insignificant, and $\epsilon$ even more so.

Continuing as before, with a little algebra, $g(x+1) = g(x)+O\!\left(\tfrac{1}{S_e(x-1)}\right)$, so $g(x+1)$ approaches $g(x)$ as $x$ increases. With a little more algebra, we can also show that $g(x+0.5)=-g(x) + O\!\left(\tfrac{1}{S_e(x-1)}\right)$; therefore if $g(x)=0$, then $g(x+0.5)$ also approaches zero as $x$ increases.
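The quantities in this answer can be sketched numerically. Below, $S_t$ is built from the Koenigs linearization at the fixed point $0$ (my own code and names, not the computation behind the graphs above, so the quoted crossing values will not be reproduced), and $S_2^{-1}$ is obtained by bisection; one can then watch $S^{-1}_2(S_e(x+1)) - S^{-1}_2(S_e(x))$ drift toward $1$.

```python
import math

def f_2sinh(t, x):
    """2sinh_t(x) = 2*sinh(x*ln t), with an overflow guard returning inf."""
    w = x * math.log(t)
    if w > 700.0:
        return math.inf
    return 2.0 * math.sinh(w)

def S(t, z, n=60):
    """Koenigs-based superfunction approximation:
    S_t(z) ~= 2sinh_t^{[n]}(lam**(z-n)) with lam = 2*ln t."""
    lam = 2.0 * math.log(t)
    u = lam ** (z - n)
    for _ in range(n):
        u = f_2sinh(t, u)
    return u

def S_inv(t, y, lo=-5.0, hi=15.0):
    """Invert the strictly increasing superfunction by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if S(t, mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# round-trip check of the inversion
z0 = S_inv(2.0, 1000.0)
roundtrip = S(2.0, z0)

# drift of S_2^{-1}(S_e(x+1)) - S_2^{-1}(S_e(x)) away from 1 at x = 2; per the
# answer it should shrink like O(1/S_e(x-1)) as x grows
drift = S_inv(2.0, S(math.e, 3.0)) - S_inv(2.0, S(math.e, 2.0)) - 1.0
```
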