Convergence of fixed point iteration when $g'(p)=1$.


I am dealing with the function $f(x)=e^{-\frac{1}{x^2}}$, which has a root $p=0$ of infinite multiplicity. I am struggling with the convergence rate of the resulting standard Newton fixed-point iteration: \begin{align*} x_{n+1}&=x_n-\frac{e^{-\frac{1}{x_n^2}}}{2x_n^{-3}e^{-\frac{1}{x_n^2}}}\\ g(x_n)&=x_n-\frac{x_n^3}{2} \end{align*} I believe this works as an iteration in theory, i.e. it maps $[-1,1]$ to a subset of $[-1,1]$, and I can show that the basin of attraction is $|x|<2$. However, heuristically it seems that this should never converge, judging from the cobweb ("spider") plot of $y=x$ and $g(x)$. (I can't post the image because I don't have enough rep.) Basically $g'(x)\approx1$ anywhere near $p$, and $g'(p)=1$, so the usual contraction condition $|g'(p)|<1$ fails.

Let's assume we start at $x_0=1$. Can someone offer an argument for how long it will take to converge to within a tolerance $\delta$ or an argument as to why it won't converge?
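(For concreteness, the iteration is cheap to run numerically. A minimal Python sketch, assuming we simply iterate $g(x)=x-\frac{x^3}{2}$ from $x_0=1$ in ordinary floating point:)

```python
# Minimal numerical sketch: iterate g(x) = x - x**3/2 from x_0 = 1 and
# watch how slowly the iterates approach the root at 0.
def iterate(x0, steps):
    x = x0
    for _ in range(steps):
        x = x - x**3 / 2
    return x

for steps in (10**2, 10**4, 10**6):
    print(steps, iterate(1.0, steps))
```

Empirically the printed values fall off roughly like $n^{-1/2}$.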



Accepted answer:

It seems easier to understand the slow rate of convergence by transforming the process into an iteration on $y = 1/x$ and observing the divergence to infinity.

That is, with $y_n = 1/x_n$ the iteration can be rewritten as:

$$ y_{n+1} \gets y_n \left( 1 - \frac{1}{2y_n^2} \right)^{-1} = y_n\left( \frac{2y_n^2}{2y_n^2 - 1} \right) = y_n\left(1 + \frac{1}{2y_n^2 - 1}\right) = y_n + \frac{1}{2y_n - y_n^{-1}} $$

So for large $y_n$ we have $y_{n+1} - y_n \approx 0.5 y_n^{-1}$. One therefore expects that if $y$ were treated as a continuous function of the index $n=t$, the behavior would be approximately that of the solution of the differential equation:

$$ y'(t) = 0.5 y^{-1} $$

namely $y(t) = \sqrt{t + c}$.

On this basis we would predict that bringing $x = 1/y$ down to $10^{-5}$ (equivalently, $y$ up to $10^5$) would require something like $10^{10}$ iterations.
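A quick numerical sanity check of this prediction (my own sketch, not part of the original answer): iterate the exact recurrence for $y_n$ and compare with $\sqrt{n+c}$, taking $c \approx y_0^2$.

```python
import math

# Iterate y_{n+1} = y_n + 1/(2*y_n - 1/y_n) (the exact recurrence for
# y = 1/x) and compare with the continuum prediction y(n) ~ sqrt(n + c),
# where c ~ y_0**2 is an approximation valid for large y.
y0 = 2.0                     # corresponds to x_0 = 0.5, inside the basin
y, n = y0, 10**6
for _ in range(n):
    y = y + 1.0 / (2.0 * y - 1.0 / y)

predicted = math.sqrt(n + y0**2)
print(y, predicted)          # the two values agree closely for large n
```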

Second answer:

Here's an image for you... [image omitted]

You know, it seems to me that it is converging, albeit slowly.

With fixed point iterations we know that the speed of convergence depends on the magnitude of $g'(x)$.

This can be shown by using the first order approximation $g(x+\epsilon)\approx g(x)+\epsilon g'(x)$.

Applying this at the fixed point, we get $\epsilon_{r+1}=g'(x)\epsilon_r$,

... so we are sure that the errors will decrease if $\left |g'(x)\right|<1$

This does not take into account what happens if the first order approximation is invalid, nor does it say what will happen if $\left |g'(x)\right|\ge1$. It's easy to assume that the method will diverge, but that is by no means certain.

So suppose we know that $g'(x)=1$.

Then $g(x+\epsilon)\approx g(x)+\epsilon g'(x)+\frac 12 \epsilon^2 g''(x)=g(x)+\epsilon +\frac 12 \epsilon^2 g''(x)$

Since $x_{r+1}=g(x_r)$

$x+\epsilon_{r+1}=g(x+\epsilon_r)$

$x+\epsilon_{r+1}=g(x)+\epsilon_r +\frac 12 \epsilon_r^2 g''(x)$

But at the root $g(x)=x$, so

$x+\epsilon_{r+1}=x+\epsilon_r +\frac 12 \epsilon_r^2 g''(x)$

$\epsilon_{r+1}=\epsilon_r +\frac 12 \epsilon_r^2 g''(x)$

What this means is that, for a small positive error $\epsilon_r$, the error will increase if $g''(x)>0$ and decrease if $g''(x)<0$.
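A toy illustration of this sign rule (my own sketch, not part of the original answer): both maps below fix $0$ with slope $1$, but opposite signs of $g''$ drive a small positive error in opposite directions.

```python
# Sign rule demo at a fixed point with g'(0) = 1:
#   g(x) = x + x**2 has g''(0) = +2  -> the error grows
#   g(x) = x - x**2 has g''(0) = -2  -> the error shrinks
def run(g, x0, steps):
    x = x0
    for _ in range(steps):
        x = g(x)
    return x

grow   = run(lambda x: x + x * x, 0.01, 50)
shrink = run(lambda x: x - x * x, 0.01, 50)
print(grow, shrink)   # grow ends above 0.01, shrink ends below it
```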

You have $g(x)=x-\frac {x^3}2$

$g'(x)=1-\frac {3x^2}2$

$g''(x)=-3x$

So $g''(x)<0$ as required, provided $x>0$.

At the root itself, however, $g''(0)=0$, so the second-order term vanishes and we must go one order further. Since $g$ is a cubic with $g'''(x)=-3$, the expansion about the root is exact:

$\epsilon_{r+1}=g(\epsilon_r)=\epsilon_r -\frac 12 \epsilon_r^3$

$\epsilon_{r+1}=\epsilon_r \left(1 -\frac 12 \epsilon_r^2 \right)$

Now to consider the number of steps required for convergence...

Suppose that after $n$ steps we have reached a point where $\epsilon_n=10^{-k}$ and that after another $m$ steps we will reach the point where $\epsilon_{n+m}=10^{-(k+1)}$.

We know that $\epsilon_{n+1}=\epsilon_n \left(1 -\frac 12 10^{-2k} \right)$

and that $\epsilon_{n+m+1}=\epsilon_{n+m} \left(1 -\frac 12 10^{-2(k+1)} \right)$

Since every step in between multiplies the error by a factor between these two,

$\epsilon_{n} \left(1 -\frac 12 10^{-2k} \right)^m \le \epsilon_{n+m} \le \epsilon_{n} \left(1 -\frac 12 10^{-2(k+1)} \right)^m$

$\left(1 -\frac 12 10^{-2k} \right)^m \le \frac 1{10} \le \left(1 -\frac 12 10^{-2(k+1)} \right)^m$

$m \log \left(1 -\frac 12 10^{-2k} \right)\le \log \left(\frac 1{10}\right) \le m \log \left(1 -\frac 12 10^{-2(k+1)} \right)$

$\frac {\log (\frac 1{10}) }{ \log \left(1 -\frac 12 10^{-2k} \right)}\le m \le \frac {\log (\frac 1{10})}{ \log \left(1 -\frac 12 10^{-2(k+1)} \right)}$

Evaluating the upper bound (approximately $2\ln 10 \cdot 10^{2(k+1)}$), I found the following:

It should take fewer than $4.61 \times 10^{10}$ steps to move from $\epsilon=10^{-4}$ to $\epsilon=10^{-5}$.

It should take fewer than $4.61 \times 10^{12}$ steps to move from $\epsilon=10^{-5}$ to $\epsilon=10^{-6}$.

It should take fewer than $4.61 \times 10^{14}$ steps to move from $\epsilon=10^{-6}$ to $\epsilon=10^{-7}$.

Beyond here the logarithms become numerically delicate (the argument is too close to $1$), but a clear pattern is visible: each further decimal place takes about $100$ times as many steps.
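As a sanity check at an affordable scale (my own sketch, not part of the original answer): count the steps the exact recurrence $\epsilon_{r+1}=\epsilon_r-\frac 12\epsilon_r^3$ (which follows directly from $g(x)=x-\frac{x^3}2$ with root $0$) takes to go from $10^{-1}$ down to $10^{-2}$, and compare with two-sided bounds of the form $\log\frac 1{10}/\log(1-\frac 12 10^{-2k}) \le m \le \log\frac 1{10}/\log(1-\frac 12 10^{-2(k+1)})$ with $k=1$.

```python
import math

# Count steps m taking eps from 1e-1 down to 1e-2 under the exact
# recurrence eps -> eps - eps**3/2, and compare with the k = 1 bounds.
eps, m = 0.1, 0
while eps > 0.01:
    eps -= eps**3 / 2
    m += 1

lower = math.log(0.1) / math.log(1 - 0.5 * 10**-2)
upper = math.log(0.1) / math.log(1 - 0.5 * 10**-4)
print(lower, m, upper)   # the observed m lands between the two bounds
```

This is consistent with the pattern above: each further decimal digit costs roughly $100$ times as many steps.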