Does an infinitely iterated function still have my solution, and why does it work?


I found the other day that you could find some solutions to an equation in the form $$f[f(x)]=x$$As a matter of fact, I found some solutions to$$f[f(f(\cdots f(x)\cdots))]=x$$

A solution, if one exists in closed form, can be found from $$f(x)=x$$

Explanation?

$$f(x)=x$$Take the "$f$" of both sides to get:$$f[f(x)]=f(x)$$Then we take notice that the left and right of the original and manipulated equations are equal.

$$f[f(x)]=f(x)=x$$

We can repeat this process any amount of times we want.

But there are problems, as you may realize.

First of all, it doesn't provide all the solutions when the equation has many; for example:$$e^{e^x}=x$$A solution should be findable, if one exists, using$$e^x=x$$

Our solution is in fact, $x=-W(-1)$, where $W$ is the Lambert W function.

However, since $e^x=x$ has infinitely many complex solutions (one for each branch of $W$), this method will only provide me with a subset of the full range of answers.

But it should still provide me with an answer that works, right?

Well, I don't know.

Which is why I ask whether $x=-W(-1)$ works as a solution for $$e^{e^{e^{e\cdots^x}}}=x$$

One can clearly see that this solution works for any finite number of $e$'s using Lambert $W$ identities, but does it work for an infinite number?

And before you tell me "no, it can't be equal because it tends towards infinity," bear in mind that $$-W(-1)$$ is a complex number (infinitely many complex numbers, in fact, one for each branch of $W$).

I also note that you can reinforce this solution by raising $e$ to the power of both sides, much the same way we evaluate $$\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}}}$$
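The way such towers are evaluated can be checked numerically; here is a minimal sketch of my own, assuming the tower is interpreted as the limit of the iteration $x_{n+1}=\sqrt{2}^{\,x_n}$:

```python
# Evaluate the infinite power tower sqrt(2)^sqrt(2)^... as the limit
# of the fixed-point iteration x_{n+1} = sqrt(2)**x_n.
import math

x = 1.0
for _ in range(200):
    x = math.sqrt(2) ** x

print(x)  # converges to 2, a solution of sqrt(2)**x = x
```

The iteration settles on $2$ rather than the other solution $4$ of $\sqrt{2}^x=x$, because $2$ is the attracting fixed point.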

More generally though, could you give me a function $$f[f(f(\cdots f(x)\cdots))]=x$$ where the solution derived from $f(x)=x$ didn't work?

Possibly where it didn't work for an infinite amount of iterations of the original function?

And if it does work, could you provide a proof on why it works, a proof different from my own?

Also, we are excluding piecewise functions because I've noticed that they, um, aren't the greatest when trying to solve for $x$ in $f[f(f(x))]=x$...and sometimes my method fails, I think.


On BEST ANSWER

The following statement is true:

If $f(x)=x$ then $f^n(x)=x$ where $f^n$ is the $n$-fold iteration of $f$.

And the proof you provided is essentially the reason why. To spell out the full inductive proof it hints at:

The case $n=1$ is trivial as $f(x)=x$ is given. For the inductive step, we see the following: $$f^{n+1}(x)=f(f^n(x))=f(x)=x$$ where we use $f^n(x)=x$ in the middle step and $f(x)=x$ in the last step.
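A quick numerical illustration of the statement (a sketch of my own, using the arbitrary example $f(x)=x^2$, whose fixed points are $0$ and $1$):

```python
# If f(x) = x, then the n-fold iterate f^n also fixes x.
def iterate(f, x, n):
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: x * x        # example function with fixed points 0 and 1
for fixed in (0.0, 1.0):   # f(fixed) == fixed
    for n in (1, 5, 50):
        assert iterate(f, fixed, n) == fixed
print("fixed points survive iteration")
```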

This holds for every function, piecewise or not. It may sometimes be vacuous, though, for functions with no fixed points.

It seems like you're not being careful about the fact that the converse of that statement is not true. That is, the following is false:

If $f^n(x)=x$ for some $n$, then $f(x)=x$.

Certainly, it is easy to construct piecewise functions for which this fails, as well as linear functions like $f(x)=-x$, where $f^2(1)=1$ but $f(1)\neq 1$. More dramatic failures are possible too; for instance, the function $$f(x)=4x(1-x)$$ does some nasty things under iteration. There are only two solutions to $f(x)=x$, namely $0$ and $\frac{3}4$. There are four solutions to $f(f(x))=x$: $0$ and $\frac{3}4$ (of course) as well as the two solutions $\frac{5\pm\sqrt{5}}8$. There are eight solutions to $f(f(f(x)))=x$, all of them in $0\leq x < 1$. In general, there are $2^n$ solutions to $f^n(x)=x$, so solving $f(x)=x$ really doesn't do well at finding all the solutions: we only found $2$ of them that way (though those solutions are always valid; there are just many more).
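These extra solutions are easy to verify directly; a small check, using only the formulas above:

```python
# Check the period-2 orbit of f(x) = 4x(1-x): the points (5 ± sqrt(5))/8
# solve f(f(x)) = x but not f(x) = x.
import math

def f(x):
    return 4 * x * (1 - x)

a = (5 + math.sqrt(5)) / 8
b = (5 - math.sqrt(5)) / 8

assert abs(f(a) - b) < 1e-12 and abs(f(b) - a) < 1e-12  # f swaps them
assert abs(f(f(a)) - a) < 1e-12                          # so they have period 2
assert abs(f(a) - a) > 0.1                               # but are not fixed points
```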

I chose this function since it is unusual in that we can write $f^n(x)$ in closed form. In particular: $$f^n(x)=\sin^2(2^n\sin^{-1}(\sqrt{x})).$$ What one may notice is that if $\sin^{-1}(\sqrt{x})$ can be written as $\frac{2\pi a}b$ for integers $a$ and $b$ with $b$ odd, then $x$ is a solution of $f^n(x)=x$ for some $n$; this means that, not only do we get cycles of every length, but every subinterval of $[0,1]$ contains points of infinitely many distinct cycles. This is an interesting function, and is a special case of the logistic map, which, if we replace the $4$ with other constants between $0$ and $4$, has other interesting properties. All of this goes to show that iterating a function can lead to very complex behavior.
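The closed form can be confirmed against direct iteration, for instance:

```python
# Compare n-fold iteration of f(x) = 4x(1-x) with the closed form
# f^n(x) = sin^2(2^n * arcsin(sqrt(x))).
import math

def iterate(x, n):
    for _ in range(n):
        x = 4 * x * (1 - x)
    return x

def closed_form(x, n):
    return math.sin(2 ** n * math.asin(math.sqrt(x))) ** 2

# small n keeps floating-point error from being amplified by the chaos
for x0 in (0.1, 0.3, 0.7):
    for n in (1, 2, 3):
        assert abs(iterate(x0, n) - closed_form(x0, n)) < 1e-6
```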

As for infinite iteration, it largely depends on how you define it. If you work pointwise and say that $$f^{\infty}(x)=\lim_{n\rightarrow\infty}f^n(x)$$ then you discover that $f^{\infty}$ usually won't make sense, since the limit won't converge. For the map $4x(1-x)$ the limit exists only at $0$, $\frac{3}4$, and the points that land exactly on them after finitely many steps, which isn't so helpful, and for a map like $2x$ it exists only at $0$. One may see easily that $f(x)=x$ implies $f^{\infty}(x)=x$; and, for continuous $f$, $f^{\infty}(x)=x$ even implies $f(x)=x$. This is because $f(f^{\infty}(x))=f^{\infty}(x)$, so if $f^{\infty}(x)=y$ then $f(y)=y$; that is, the range of $f^{\infty}$ consists only of fixed points. One might also find functions like $f(x)=\frac{1}2x$ for which $f^{\infty}(x)=0$ everywhere.

In the particular case of $f(x)=e^x$, the behavior under infinite iteration is not terribly easy to track - however, it is relatively easy to see that the only values for which $\lim_{n\rightarrow\infty}f^n(x)$ converges are fixed points and points that iterate to fixed points (i.e. any value of $\log(-W(-1))$ or a logarithm thereof suffices). In particular, the derivative of $f$ at any fixed point has absolute value greater than one, meaning that applying $f$ to points near a fixed point pushes them further away, thus no sequence can approach a fixed point without outright hitting it.
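A sketch of this check (my own, locating the principal fixed point with complex Newton's method rather than with a Lambert $W$ routine):

```python
# Find the fixed point x* = -W(-1) of f(x) = e^x with complex Newton's
# method on h(x) = e^x - x, then verify that exp repels it:
# |f'(x*)| = |e^{x*}| = |x*| > 1.
import cmath

x = 0.3 + 1.3j                    # seed near the principal fixed point
for _ in range(50):
    x = x - (cmath.exp(x) - x) / (cmath.exp(x) - 1)

assert abs(cmath.exp(x) - x) < 1e-12   # a genuine fixed point
assert abs(x) > 1                       # derivative magnitude ~ 1.374
print(x)  # approximately 0.3181315 + 1.3372357j
```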

Also, if you are interested, the branch of mathematics that handles questions like these relating to the iteration of functions is called "Chaos theory". It is known for being a branch that produces lots of pretty pictures.

On

Consider the function $f: \mathbb{R}-\{0\} \rightarrow \{-1,1\}$ given by

$$f(x)=\begin{cases} 1 & x<0 \\ -1 & x>0 \end{cases}$$

Then $f$ has no fixed points yet $f \circ f$ has two fixed points, $\{-1,1\}$.
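A direct translation of this example into code:

```python
# f maps negatives to 1 and positives to -1: no fixed points,
# but f(f(x)) fixes both -1 and 1.
def f(x):
    return 1 if x < 0 else -1   # defined for x != 0

assert f(-1) == 1 and f(1) == -1          # f itself has no fixed points
assert f(f(-1)) == -1 and f(f(1)) == 1    # but f∘f fixes -1 and 1
```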

On

The method you found is actually the well-known 'fixed-point iteration' method, and much material about its conditions for convergence can be found on the internet.

I also found this method some days ago while playing with a mobile calculator app, as follows. Type and compute the value of a function with some random initial guess. For the next iteration, replace the initial guess in the typed expression with 'ANS' (by pressing its button on the calculator keypad, generally near the '=' key); for clarity, put 'ANS' in brackets. Then press '=' to get the second iteration, press it again for the third, and so on. In the cases where the answer converges, the number of stable, accurate digits keeps increasing until it meets or exceeds the number of digits the display shows, after which pressing '=' again and again no longer changes the answer. I tried this on many polynomial and transcendental functions; sometimes it worked, sometimes it didn't. When I searched the internet for similar work, I found that it is no different from the fixed-point iteration method. When it doesn't converge, it either oscillates between two or more numbers or diverges to infinity.

To solve $f(x)=0$ for $x$, you rearrange $f(x)=0$ into the form $x=g(x)$. There can be many ways of getting such a $g(x)$. For each $g(x)$, type that $g(x)$ into the calculator with the initial guess $x=x_0$, then proceed as explained above. Putting your calculator in complex mode can also give you complex roots in many situations. I am working more on this topic and will share any new findings about this scheme; you can also work on it and let us know about anything new. In the MATLAB command window, typing 'ans' in place of the previous result and re-evaluating $g(x)$ again and again likewise saves a lot of time and programming effort.

From experiments, I found that in many cases the iteration $x_{n+1}=g(x_n)$ converges when $g$ changes more slowly than $x$ near the root (the standard condition is $|g'(x)|<1$ in a neighborhood of the solution).

For example, $x=e^x$ cannot be solved by $x_{n+1}=e^{x_n}$. We have to extract $x$ from the exponent instead, as $x=\ln x$, and then proceed with $x_{n+1}=\ln(x_n)$, taking $x_0$ to be any number of your choice. The solution you finally get is the complex number $0.3181315052\ldots+1.33723570143\ldots\,i$. Interestingly, the iteration for $x=\sin(x)$ converges, very slowly, to the solution at $x=0$ (for $x=\tan x$ the fixed point at $0$ is only neutral, since $\tan'(0)=1$, so the behavior there is more delicate). Many other interesting equations can be solved easily using this method.
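The two rearrangements can be compared in a few lines of Python (a sketch; `cmath` plays the role of the calculator's complex mode):

```python
import math, cmath

# Rearrangement 1: x_{n+1} = e^{x_n} runs away from the root (it repels).
x = 0.5
for _ in range(4):
    x = math.exp(x)
print(x)  # already astronomically large

# Rearrangement 2: x_{n+1} = log(x_n) converges in complex mode.
z = 2 + 0j
for _ in range(300):
    z = cmath.log(z)

assert abs(cmath.exp(z) - z) < 1e-12  # z solves e^z = z, i.e. z = -W(-1)
print(z)  # roughly 0.3181315 + 1.3372357j
```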