A simple approximation algorithm?


I'm not sure whether this method always works, but I have found it useful for approximating things easily: you need no more than simple algebra to understand it.

Suppose you are trying to solve $f(x)=x$. By my method, start with $a_0=$ whatever.

We have $a_{n+1}=f(a_n)$ and, when the sequence converges, $x=\lim_{n\to\infty}a_n$. For those who don't know what a limit is, it simply means we repeat the step a lot of times. (Press the button a lot.)

Now, as an example, it is commonly known that $\cos(x)=x$ can be solved in this manner.

Say we start with $a_0=0$,

$$a_1=\cos(0)=1$$

$$a_2=\cos(1)=\dots$$

$$a_3=\cos(\dots)$$

And so on. Eventually, we will get the solution to some desired accuracy.
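In code, the whole procedure is just a loop. Here is the $\cos(x)=x$ example as a short Python sketch (the helper name and iteration count are arbitrary choices of mine, not part of the method):

```python
import math

def fixed_point(f, a0, n_iter=50):
    """Repeatedly apply f: a_{n+1} = f(a_n), starting from a0."""
    a = a0
    for _ in range(n_iter):
        a = f(a)
    return a

# Solve cos(x) = x starting from a_0 = 0.
x = fixed_point(math.cos, 0.0)
print(x)  # converges to roughly 0.739085
```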

Another example could be $f(x)=x^2-1$. I tried starting with $a_0=0$:

$$a_1=-1$$

$$a_2=0$$

$$a_3=-1$$

This one oscillates, which is a sign that we need to use a different tactic:

$$a_{n+1}=f(a_n)\quad\Longrightarrow\quad a_{n-1}=f^{-1}(a_n)$$That is, run the recurrence backwards and attempt to find $\lim_{n\to-\infty}a_n$, using $f^{-1}(x)=\sqrt{x+1}$ or $f^{-1}(x)=-\sqrt{x+1}$. (We'll stick with the positive square root first.)

$$a_0=0$$

$$a_{-1}=1$$

$$a_{-2}=\sqrt{2}$$

$$a_{-3}=\sqrt{\sqrt{2}+1}\approx1.55377$$

$$a_{-4}\approx1.59805$$

$$\vdots$$

$$a_{-10}\approx1.618017$$

One of the correct answers to $x^2-1=x$ is $x=\frac{1+\sqrt{5}}2\approx1.618034$, so our approximation is pretty good.

The second solution is $x=\frac{1-\sqrt{5}}2\approx-0.618034$, which can be found using $f^{-1}(x)=-\sqrt{x+1}$.
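Both backward iterations are easy to check numerically. A Python sketch (starting points and iteration counts are my own choices; note that the negative branch must not start at $0$, since $0\mapsto-1\mapsto0$ is a cycle of that branch):

```python
import math

def fixed_point(f, a0, n_iter=200):
    """Iterate a <- f(a) many times, starting from a0."""
    a = a0
    for _ in range(n_iter):
        a = f(a)
    return a

# Positive branch of the inverse: converges to (1 + sqrt(5))/2.
phi = fixed_point(lambda x: math.sqrt(x + 1), 0.0)

# Negative branch: converges to (1 - sqrt(5))/2.
# (Starting at 0 would just bounce between 0 and -1, so start at -0.5.)
neg = fixed_point(lambda x: -math.sqrt(x + 1), -0.5)

print(phi, neg)  # about 1.618034 and -0.618034
```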

If the problem is in the form $f(x)=k$, just solve $g(x)=x$ with $g(x)=f(x)-k+x$.
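As a concrete (made-up) example of this rewrite: to solve $e^{-x}=0.5$, iterate $g(x)=e^{-x}-0.5+x$. Near the true solution $x=\ln 2$ we have $g'(x)=1-e^{-x}\approx0.5$, so the iteration converges. A quick Python check:

```python
import math

a = 0.5                          # starting guess
for _ in range(60):
    a = math.exp(-a) - 0.5 + a   # g(x) = f(x) - k + x, with f(x) = e^{-x}, k = 0.5

print(a)  # about 0.693147, i.e. ln 2
```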

So, does this method converge quickly? And even if it doesn't, isn't it one of the easiest ways to approximate values?

EDIT

I now know from the comments that this is called fixed-point iteration, but what about my trick of switching to the inverse function when the iteration doesn't converge?

3 Answers

ANSWER 1

This method is governed by the Banach fixed-point theorem, also known as the contraction mapping theorem. From the second name you can guess that it does not work for all $f$, but only for contractions; see Wikipedia. Your trick consists of applying the theorem to the inverse of $f$ instead of $f$ itself.

ANSWER 2

From the Wikipedia page on fixed point iteration you can see that this method works, for instance, if $f$ is Lipschitz continuous with Lipschitz constant $L<1$.

If $f$ is differentiable, this condition amounts to having $|f'(x)|\le L<1$ in a neighbourhood of the fixed point $x_0$, and starting with a point inside that neighbourhood.

Your inverse-function method works because $(f^{-1})'(x_0)=1/f'(x_0)$ at the fixed point: if $|f'(x_0)|>1$, you then have $|(f^{-1})'(x_0)|<1$.
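For the $x^2-1$ example from the question, this is easy to check numerically: at the fixed point $\varphi=(1+\sqrt5)/2$ we have $f'(\varphi)=2\varphi\approx3.236>1$, while the positive inverse branch $f^{-1}(x)=\sqrt{x+1}$ has derivative $1/(2\sqrt{x+1})$, which at $\varphi$ equals $1/f'(\varphi)\approx0.309<1$. A quick sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2               # fixed point of f(x) = x^2 - 1

f_prime = 2 * phi                          # f'(phi) > 1: direct iteration diverges
inv_prime = 1 / (2 * math.sqrt(phi + 1))   # (f^{-1})'(phi) < 1: inverse iteration converges

print(f_prime, inv_prime, 1 / f_prime)
```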

ANSWER 3

A method that very often works is the secant method. Say you want to find $x^*$ such that $f(x^*) = 0$, and you start with approximations $x_0, x_1$ (it helps if $f(x_0)$ and $f(x_1)$ have opposite signs, but it isn't required). Then you compute (note that you can reuse previous values, so only one new evaluation of $f$ is needed per iteration):

$$x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$$

(This has to be computed in exactly this form: near a zero $x^*$ the fraction subtracts very similar numbers, which makes its value hard to compute accurately; but that doesn't matter much, since the whole second term is only a small correction to $x_n$ once $f(x_n)$ is near zero.) More often than not, $x_n \to x^*$ (one of the zeros, if there are several!).

For manual computation you can just recompute the fraction from time to time and reuse its value for several iterations. You can also choose the pair $x_{n-1}, x_n$ so that they always straddle a zero, thus guaranteeing convergence to one in the initial range. Doing either of these slows down convergence, however.
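The recurrence above is easy to code. A minimal Python sketch (the names, tolerance, and test function are my own choices, not from this answer):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                  # flat secant line: cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1               # reuse the previous value: one new f-evaluation per step
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

# Find the root of cos(x) - x (the same fixed point as in the question).
root = secant(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)  # about 0.739085
```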