The other day I found that you can obtain some solutions to an equation of the form $$f[f(x)]=x$$As a matter of fact, I found some solutions to$$f[f(f(\cdots f(x)\cdots))]=x$$
A solution, if one exists in closed form, can be found from $$f(x)=x$$
Explanation?
$$f(x)=x$$Take the "$f$" of both sides to get:$$f[f(x)]=f(x)$$Then notice that, combining this with the original equation, all three expressions are equal:
$$f[f(x)]=f(x)=x$$
We can repeat this process any amount of times we want.
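For a quick numerical sanity check of that idea, here is a small sketch (the map $\cos$ is just an illustrative choice, not anything from the problem):

```python
import math

# Illustration only: f is an arbitrary example map (cos), chosen
# because iterating it visibly settles onto its unique fixed point.
def f(x):
    return math.cos(x)

# Find the fixed point of cos(x) = x (the "Dottie number") by iteration.
x = 0.0
for _ in range(200):
    x = f(x)

# A fixed point of f is automatically a fixed point of every iterate:
# f(f(f(x))) lands right back on x.
print(abs(f(f(f(x))) - x) < 1e-12)  # True
```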
But there are problems, as you may realize.
First of all, it doesn't provide all the solutions when the equation has many; for example:$$e^{e^x}=x$$A solution, if one exists, should be findable using$$e^x=x$$
Our solution is, in fact, $x=-W(-1)$, where $W$ is the Lambert W function.
However, since $e^x=x$ has infinitely many complex solutions (one for each branch of $W$), this method will only provide me with a subset of the full range of answers.
But it should still provide me with an answer that works, right?
Well, I don't know.
Which is why I ask if $x=-W(-1)$ works as a solution for $$e^{e^{e^{e\cdots^x}}}=x$$
One can clearly see that this solution works for any finite amount $e$'s using Lambert W identities, but does it work for an infinite amount?
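The finite case is easy to confirm numerically. The sketch below finds the complex solution of $e^z=z$ (which is $-W(-1)$ on the principal branch) with plain Newton's method, so no Lambert W library is needed, and checks that a two-story tower returns it unchanged:

```python
import cmath

# Solve e^z - z = 0 by Newton's method; the starting point 0.3 + 1.3i
# is chosen near the principal-branch value -W(-1) ~ 0.3181 + 1.3372i.
def solve_exp_fixed_point(z, steps=50):
    for _ in range(steps):
        z = z - (cmath.exp(z) - z) / (cmath.exp(z) - 1)
    return z

x = solve_exp_fixed_point(0.3 + 1.3j)

print(abs(cmath.exp(x) - x) < 1e-9)             # True: e^x = x
print(abs(cmath.exp(cmath.exp(x)) - x) < 1e-9)  # True: e^(e^x) = x
```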
And before you tell me that "no, it can't be equal because it tends towards infinity," bear in mind that $-W(-1)$ is a complex number; infinitely many complex numbers, in fact, one for each branch of $W$.
I also note that you can reinforce this solution by raising $e$ to the power of both sides, much the same way we evaluate $$\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}}}$$
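That tower is usually evaluated by iterating $t\mapsto\sqrt{2}^{\,t}$, which converges to the fixed point $t=2$ (a quick sketch, for illustration):

```python
import math

# Iterate t -> sqrt(2)^t starting from below; the iteration converges
# to the fixed point t = 2. (t = 4 also solves sqrt(2)^t = t, but that
# fixed point is repelling, so the iteration never finds it.)
t = 1.0
for _ in range(200):
    t = math.sqrt(2) ** t

print(abs(t - 2.0) < 1e-9)  # True
```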
More generally, though, could you give me an equation $$f[f(f(\cdots f(x)\cdots))]=x$$ for which the solution derived from $f(x)=x$ doesn't work?
Possibly where it didn't work for an infinite amount of iterations of the original function?
And if it does work, could you provide a proof on why it works, a proof different from my own?
Also, we are excluding piecewise functions, because I've noticed that they, um, aren't the greatest when trying to solve for $x$ in $f[f(f(x))]=x$... and sometimes my method fails on them, I think.
The following statement is true: if $f(x)=x$, then $f^n(x)=x$ for every $n\geq 1$.
And it's true by essentially the proof you provided. To spell out the whole inductive proof hinted at: the base case $n=1$ is just the hypothesis $f(x)=x$; for the inductive step, if $f^n(x)=x$, then $$f^{n+1}(x)=f(f^n(x))=f(x)=x.$$
This holds for every function, piecewise or not. It may sometimes be vacuous, though, for functions with no fixed points.
It seems like you're not being careful about the fact that the converse of that statement is not true. That is, the following is false: if $f^n(x)=x$ for some $n$, then $f(x)=x$.
Certainly, it is easy to construct piecewise functions for which this fails, as well as linear functions like $f(x)=-x$ where $f^2(1)=1$ but $f(1)\neq 1$. More dramatic failures are possible too - for instance, the function $$f(x)=4x(1-x)$$ does some nasty things under iteration. There are only two solutions to $f(x)=x$ which are $0$ and $\frac{3}4$. There are four solutions to $f(f(x))=x$ which are $0$ and $\frac{3}4$ (of course) as well as the two solutions given by $\frac{5\pm\sqrt{5}}8$. In fact, there are eight solutions to $f(f(f(x)))=x$ - all of them in $0\leq x < 1$. In general, there are $2^n$ solutions to $f^n(x)=x$ - so your method really doesn't do well in finding all the solutions, since we only found $2$ of them by solving $f(x)=x$ (but those solutions will always be valid - there are just a lot more).
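A quick check of that genuine 2-cycle (just a numerical illustration of the claim above):

```python
import math

# The logistic-type map from the answer.
def f(x):
    return 4 * x * (1 - x)

# The two extra solutions of f(f(x)) = x, namely (5 +/- sqrt(5))/8.
a = (5 + math.sqrt(5)) / 8
b = (5 - math.sqrt(5)) / 8

print(abs(f(a) - b) < 1e-12)     # True: f sends a to b ...
print(abs(f(b) - a) < 1e-12)     # True: ... and b back to a,
print(abs(f(f(a)) - a) < 1e-12)  # True: so f(f(a)) = a,
print(abs(f(a) - a) > 0.1)       # True: yet a is not a fixed point of f.
```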
I chose this function since it is unusual in that we can write $f^n(x)$ in closed form. In particular: $$f^n(x)=\sin^2(2^n\sin^{-1}(\sqrt{x})).$$ What one may notice about this is that if $\sin^{-1}(\sqrt{x})$ can be written as $\frac{2\pi a}b$ for integers $a$ and $b$ with $b$ odd, then $x$ is a solution of $f^n(x)=x$ for some $n$ - meaning that, not only do we get cycles of every length, every subinterval of $[0,1]$ contains points of infinitely many distinct cycles. This is an interesting function, and is a special case of the logistic map which, if we replace the $4$ with other constants between $0$ and $4$, has other interesting properties. However, it goes to show that iterating a function can lead to very complex properties.
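The closed form is easy to spot-check against direct iteration (the test point $x_0=0.2$ is arbitrary):

```python
import math

# f and its n-fold iterate, computed two ways.
def f(x):
    return 4 * x * (1 - x)

def iterate(x, n):
    for _ in range(n):
        x = f(x)
    return x

def closed_form(x, n):
    # f^n(x) = sin^2(2^n * asin(sqrt(x)))
    return math.sin(2 ** n * math.asin(math.sqrt(x))) ** 2

x0 = 0.2
ok = all(abs(iterate(x0, n) - closed_form(x0, n)) < 1e-8 for n in range(1, 6))
print(ok)  # True
```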
As for infinite iteration, it largely depends on how you define it. If you do something pointwise and say that $$f^{\infty}(x)=\lim_{n\rightarrow\infty}f^n(x)$$ then you discover that, usually $f^{\infty}$ won't make sense since the limit won't converge - so for that map $4x(1-x)$ this only exists at $0$ and $\frac{3}4$, which isn't so helpful, and for maps like $2x$, it only exists at $0$. In fact, one may see that $f(x)=x$ implies $f^{\infty}(x)=x$ easily - and even $f^{\infty}(x)=x$ implies $f(x)=x$ in this case (but this is because $f(f^{\infty}(x))=f^{\infty}(x)$ meaning that if $f^{\infty}(x)=y$ then $f(y)=y$ - that is, its range is only fixed points). One might also find functions like $f(x)=\frac{1}2x$ such that $f^{\infty}(x)=0$.
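The pointwise behavior described above can be seen in two of the simplest linear examples (a sketch, using a finite number of iterations as a stand-in for the limit):

```python
# f_inf(x) = lim f^n(x), approximated by 60 iterations.
def iterate(f, x, n=60):
    for _ in range(n):
        x = f(x)
    return x

print(iterate(lambda x: x / 2, 1.0) < 1e-12)  # True: x/2 contracts everything to 0
print(iterate(lambda x: 2 * x, 1.0) > 1e12)   # True: 2x diverges off its fixed point
print(iterate(lambda x: 2 * x, 0.0) == 0.0)   # True: the limit exists only at 0
```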
In the particular case of $f(x)=e^x$, the behavior under infinite iteration is not terribly easy to track - however, it is relatively easy to see that the only values for which $\lim_{n\rightarrow\infty}f^n(x)$ converges are fixed points and points that iterate to fixed points (i.e. any value of $\log(-W(-1))$ or a logarithm thereof suffices). In particular, the derivative of $f$ at any fixed point has absolute value greater than one, meaning that applying $f$ to points near a fixed point pushes them further away, thus no sequence can approach a fixed point without outright hitting it.
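The repulsion is easy to exhibit numerically. Since $e^x=x$ at the fixed point, the multiplier $|(e^z)'|=|e^x|=|x|$, and $|-W(-1)|\approx 1.374>1$; a sketch (finding the fixed point by Newton's method as a stand-in for a Lambert W routine):

```python
import cmath

# Fixed point of e^z, found by Newton's method from a nearby start.
x = 0.3 + 1.3j
for _ in range(50):
    x = x - (cmath.exp(x) - x) / (cmath.exp(x) - 1)

# The multiplier of the fixed point is |e^x| = |x|.
print(abs(x) > 1)  # True: |x| ~ 1.374 exceeds 1

z = x + 1e-6                 # start a tiny distance from the fixed point
d0 = abs(z - x)
d1 = abs(cmath.exp(z) - x)   # distance after one application of exp
print(d1 > d0)  # True: exp pushed the point further away
```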
Also, if you are interested, the branch of mathematics that handles questions like these relating to the iteration of functions is called "Chaos theory". It is known for being a branch that produces lots of pretty pictures.