I'm working on the half iterate of the exponential function. There is no generally agreed meaning for fractional iteration, but intuitively it should be a function $f(x)$ such that $f(f(x))=e^x$.
Here's how I'm finding $f(x)$ when $x\approx 0$:
If $x\approx 0$, then we have $$e^x\approx 1+x+\frac{x^2}{2}. \tag{1}$$
Now, if we assume the required function $f(x)$ to be of the form $ax^2+bx+c$, then $$f(f(x))= a^3x^4+2a^2bx^3+(2a^2c+ab^2+ab)x^2+(2abc+b^2)x+ac^2+bc+c$$
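As a sanity check, the expansion of $f(f(x))$ can be reproduced symbolically; here is a short sketch using sympy:

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
f = lambda y: a*y**2 + b*y + c   # the assumed quadratic form
print(sp.expand(f(f(x))))        # reproduces the expansion above
```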
But since $x\approx 0$, we may drop the $x^3$ and $x^4$ terms, so
$$f(f(x))=e^x\approx ac^2+bc+c+(2abc+b^2)x+(2a^2c+ab^2+ab)x^2. \tag{2}$$
Comparing coefficients of like powers of $x$ in equations (1) and (2), we get
$$ac^2+bc+c=1 \tag {3.1}$$ $$2abc+b^2=1 \tag {3.2}$$ $$2a^2c+ab^2+ab=\frac{1}{2} \tag {3.3}$$
The problem is solving these equations. I've tried substitution, but that reduces them to a polynomial of very high degree which I don't know how to solve. Is there some way to solve these for $a$, $b$, and $c$, and hence obtain the required half iterate of $e^x$ as $ax^2+bx+c$?
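For reference, a numerical root of (3.1)–(3.3) can be found directly with a root-finder, even if a closed form is elusive. Here is a sketch using scipy's `fsolve`; the initial guess is my own choice, not something dictated by the problem:

```python
import numpy as np
from scipy.optimize import fsolve

def equations(v):
    a, b, c = v
    return [a*c**2 + b*c + c - 1,            # (3.1)
            2*a*b*c + b**2 - 1,              # (3.2)
            2*a**2*c + a*b**2 + a*b - 0.5]   # (3.3)

# initial guess (an assumption; chosen near plausible values)
a, b, c = fsolve(equations, [0.3, 0.9, 0.5])
print(a, b, c)
```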
There is a method using Carleman matrices which gives increasingly good approximations.
Consider the family of polynomials $$g_t(x) = 1+ x + \frac{x^2}{2!} +...+ \frac{x^t}{t!} $$ with the aim of finding increasingly precise approximations as $t$ grows: $$ f_t(f_t(x)) \approx g_t(x) \approx e^x $$
For some chosen $t$, define the Carleman matrix $G$ for $g_t(x)$ (I use a version which is transposed relative to the Wikipedia entry); for instance, for $t=2$: $$ G_2 = \left[\small \begin{array} {} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 0 & 1/2 & 2 & 9/2 & 8 & 25/2 & 18 & 49/2 \\ 0 & 0 & 1 & 4 & 10 & 20 & 35 & 56 \\ 0 & 0 & 1/4 & 9/4 & 17/2 & 45/2 & 195/4 & 371/4 \\ 0 & 0 & 0 & 3/4 & 5 & 37/2 & 51 & 469/4 \\ 0 & 0 & 0 & 1/8 & 2 & 45/4 & 41 & 931/8 \\ 0 & 0 & 0 & 0 & 1/2 & 5 & 51/2 & 92 \end{array}\right]$$ We see the coefficients of $g_2(x)^0 = 1$ in the first column, those of our function $g_2(x)$ in the second column, those of $g_2(x)^2$ in the third, those of $g_2(x)^3$ in the fourth, and so on. The key is that with a vector $V(x)=[1,x,x^2,x^3,...]$ of the appropriate dimension we can form the dot product $$ V(x) \cdot G_2 = V(g_2(x))$$ In practice, when using software, the columns of higher index are always truncated versions of the powers of $g_2(x)$, so empirically we must take the approximations with a grain of salt.
The key is now that, because of the form of the Carleman matrices, the "output" has the same form as the "input", so we can repeat the application: $$ V(x) \cdot G_2 = V(g_2(x)) \\ V(g_2(x)) \cdot G_2 = V(g_2°^2(x)) \\ V(g_2°^2(x)) \cdot G_2 = V(g_2°^3(x)) \\ $$ or, more concisely, because the matrix products are associative, $$V(x) \cdot G_2^h = V(g_2°^h(x)) $$ We see that the $h$'th power of $G_2$ gives the $h$'th iterate of $g_2(x)$, and we can assume that inserting $h=1/2$ gives at least an approximation to $g_2°^{1/2}(x)=f(x)$.
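Here is a small numerical sketch of this construction (the helper names `polymul_trunc`, `carleman`, `V` are my own, not standard): it builds the truncated Carleman matrix for $g_2$ and checks both $V(x) \cdot G_2 = V(g_2(x))$ and that $G_2^2$ encodes the second iterate.

```python
import numpy as np

def polymul_trunc(p, q, n):
    """Product of two polynomials (coefficients in ascending order),
    truncated to the first n coefficients."""
    r = np.zeros(n)
    for i, pi in enumerate(p[:n]):
        for j, qj in enumerate(q[:n - i]):
            r[i + j] += pi * qj
    return r

def carleman(g, n):
    """n x n Carleman matrix (transposed convention): column k holds
    the first n coefficients of g(x)^k."""
    G = np.zeros((n, n))
    p = np.zeros(n)
    p[0] = 1.0                      # g(x)^0 = 1
    for k in range(n):
        G[:, k] = p
        p = polymul_trunc(p, g, n)  # next power of g, truncated
    return G

g2 = [1.0, 1.0, 0.5]                # g_2(x) = 1 + x + x^2/2
G2 = carleman(g2, 8)

V = lambda x, n: x ** np.arange(n)  # V(x) = [1, x, x^2, ...]
x = 0.1
gx = 1 + x + x**2 / 2               # g_2(x)
ggx = 1 + gx + gx**2 / 2            # g_2(g_2(x))

print(V(x, 8) @ G2)                 # ≈ [1, g_2(x), g_2(x)^2, ...]
print((V(x, 8) @ (G2 @ G2))[1], ggx)  # both give g_2(g_2(x))
```

Entry 1 of $V(x) \cdot G_2^2$ is exact here because $g_2 \circ g_2$ has degree 4, below the truncation size of 8.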
What we need is a matrix function for finding the square root; this can be done either by diagonalization (implemented in Pari/GP and larger CAS systems) or by Newton iteration.
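As a sketch of this step, scipy's `sqrtm` (a Schur-based route, in the spirit of the diagonalization approach) applied to the $8 \times 8$ matrix $G_2$ above should give the half-iterate coefficients in its second column; the helper functions below are my own naming, not a standard API:

```python
import numpy as np
from scipy.linalg import sqrtm

def polymul_trunc(p, q, n):
    # truncated product of ascending-order coefficient lists
    r = np.zeros(n)
    for i, pi in enumerate(p[:n]):
        for j, qj in enumerate(q[:n - i]):
            r[i + j] += pi * qj
    return r

def carleman(g, n):
    # n x n Carleman matrix (transposed convention): column k = coeffs of g^k
    G = np.zeros((n, n))
    p = np.zeros(n)
    p[0] = 1.0
    for k in range(n):
        G[:, k] = p
        p = polymul_trunc(p, g, n)
    return G

G2 = carleman([1.0, 1.0, 0.5], 8)   # g_2(x) = 1 + x + x^2/2
F = np.real(sqrtm(G2))              # matrix square root; tiny imaginary parts dropped
half = F[:, 1]                      # column 1: coefficients of the half-iterate
print(half[:3])                     # ≈ [c, b, a]
```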
What I find for $G_2^{0.5} $ is
and we see that the coefficients in the second column give some approximation to Sheldon's half-iterate function. Also, the first three coefficients $(c=)\,0.49649737$, $(b=)\,0.88272304$, $(a=)\,0.29626378$ give some approximation to the values of $(c,b,a)$ which I gave in my earlier comment and which solve your system of equations (3.1) to (3.3).
Now if we use a better approximation of $g_t(x)$ to the true $\exp(x)$ function, say $t=8$, we get better approximations to Sheldon's Kneser solution. Let $G_8$ be defined with size $16 \times 16$; then its top left is:
and the square-root $G_8^{0.5} $ is
The coefficients in the second column are now better approximations to Sheldon's solution and give a better $f(x)$ with $f(f(x))\approx \exp(x)$.
You see the principle. Ideally the Carleman matrix would be of infinite size, and the polynomial of infinite order (or better: equal to the exponential series).
By the logic of the Carleman matrices, the following method would seem to be less accurate, but its pattern of approximation towards the Kneser solution seems to be even better.
Here is a list of the coefficients of $f_t(x)$ for $t=3..16$, where the Carleman matrices $G_t$ are also truncated to size $t \times t$ (and not $2t \times 2t$, $3t \times 3t$, or the like). I've written them horizontally for better visual comparison of the approximation towards the Kneser solution:
Kneser
So I think the Kneser solution is the limit of this process as $t \to \infty$.
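For what it's worth, this scan over $t$ can be sketched as follows (truncating $G_t$ to $t \times t$ as described; I take the real part of `sqrtm`'s result, and the smallest truncations should be read only qualitatively; helper names are my own):

```python
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

def polymul_trunc(p, q, n):
    # truncated product of ascending-order coefficient lists
    r = np.zeros(n)
    for i, pi in enumerate(p[:n]):
        for j, qj in enumerate(q[:n - i]):
            r[i + j] += pi * qj
    return r

def carleman(g, n):
    # n x n Carleman matrix (transposed convention): column k = coeffs of g^k
    G = np.zeros((n, n))
    p = np.zeros(n)
    p[0] = 1.0
    for k in range(n):
        G[:, k] = p
        p = polymul_trunc(p, g, n)
    return G

for t in range(3, 9):
    g = [1.0 / factorial(k) for k in range(t + 1)]  # g_t(x)
    G = carleman(g, t)                              # truncated to t x t
    F = np.real(sqrtm(G))
    print(t, F[:4, 1])                              # leading coefficients of f_t
```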