Stopping criterion for fixed-point methods


Let $g: \mathbb{R} \to \mathbb{R}$ define a fixed-point method. We are interested in the points $\alpha$ such that $\alpha = g(\alpha)$.

Let $(x_n)_{n\in\mathbb{N}}$ be defined by $x_{n+1} := g(x_n)$. Suppose that $x_n \to \alpha \ (n \to +\infty)$.
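For concreteness, here is a minimal Python sketch of such an iteration, using the hypothetical choice $g(x) = \cos x$ (an assumption for illustration, not part of the question), whose fixed point is $\alpha \approx 0.7390851$:

    import math

    def fixed_point(g, x0, n_steps):
        """Iterate x_{k+1} = g(x_k) and return the whole sequence."""
        xs = [x0]
        for _ in range(n_steps):
            xs.append(g(xs[-1]))
        return xs

    # Hypothetical example: g(x) = cos(x); |g'| < 1 near the fixed point,
    # so the iteration converges to alpha ≈ 0.7390851.
    xs = fixed_point(math.cos, x0=1.0, n_steps=50)
    print(xs[-1])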

How to prove the following?

If $g$ is sufficiently smooth in a neighbourhood of $\alpha$, then

\begin{equation} \forall \varepsilon > 0 \ \exists m > 0 \ \big(( \ \forall n > m \ \vert x_n - x_{n-1} \vert < \varepsilon) \rightarrow \ (\exists C > 0 \ \forall n > m \ \vert x_n - \alpha\vert < C\varepsilon)\big). \end{equation}

In other words, the theorem assures that we can safely estimate the error of an iterative fixed-point method at a given iteration by looking at the computable value $\vert x_n - x_{n-1}\vert$.
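As an illustration (a sketch under the same assumption $g(x) = \cos x$; the exact fixed point $\alpha$ is hard-coded here only so that the true error can be displayed), one can compare the computable increment $\vert x_n - x_{n-1}\vert$ with the unknown error $\vert x_n - \alpha\vert$:

    import math

    g = math.cos
    alpha = 0.7390851332151607   # fixed point of cos, used only to show the true error

    x_prev, x = 1.0, g(1.0)      # x_0 and x_1
    for n in range(2, 12):
        x_prev, x = x, g(x)      # now x = x_n, x_prev = x_{n-1}
        increment = abs(x - x_prev)
        error = abs(x - alpha)
        print(f"n={n:2d}  |x_n - x_(n-1)| = {increment:.3e}   |x_n - alpha| = {error:.3e}")

In this particular example the ratio of error to increment settles near $\vert g'(\alpha)\vert / \vert 1 - g'(\alpha)\vert \approx 0.40$, so the increment does track the error up to a moderate constant.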


1 Answer


Let $g$ be $C^1$ in a neighbourhood of $\alpha$. Furthermore, suppose that $g'(\alpha) \neq 1$; otherwise more regularity is required. Since $\alpha = g(\alpha)$, a first-order Taylor expansion around $\alpha$ gives \begin{equation} x-g(x) = \alpha - g(\alpha) + (x-\alpha) (1-g'(\alpha)) + o(x-\alpha) = (x-\alpha)(1-g'(\alpha)) + o(x-\alpha). \end{equation} Substituting $x_n$ for $x$ and using $x_{n+1} = g(x_n)$, we get \begin{equation} x_n - x_{n+1} = (x_n - \alpha)(1-g'(\alpha)) + o(x_n - \alpha). \end{equation} More precisely, the mean value theorem applied to $x \mapsto x - g(x)$ gives, for some $\xi_n$ between $x_n$ and $\alpha$, \begin{equation} x_n - x_{n+1} = (x_n - \alpha)(1 - g'(\xi_n)). \end{equation} Then for \begin{equation} C := \left\Vert \frac{1}{1 - g'} \right\Vert_{\infty}, \end{equation} where the norm is taken over the interval on which the Taylor expansion is valid (this is finite, possibly after shrinking the interval, because $g'$ is continuous and $g'(\alpha) \neq 1$), we obtain, for all $n \ge m$ with $m$ large enough that $x_n$ lies in this interval, \begin{equation} \vert x_n - \alpha \vert \le C\, \vert x_n - x_{n+1} \vert < C\varepsilon, \end{equation} which is the thesis.

We have also proved that $C$ is independent of $\varepsilon$.
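A quick numerical sanity check of this bound (again with the hypothetical $g(x) = \cos x$, which is not part of the original question): taking $C$ as a grid approximation of $\Vert 1/(1-g')\Vert_\infty$ over an interval containing $\alpha$ and all iterates, the inequality $\vert x_n - \alpha\vert \le C\,\vert x_n - x_{n+1}\vert$ holds along the whole sequence:

    import math

    g = math.cos
    alpha = 0.7390851332151607           # fixed point of cos

    def dg(t):
        """Derivative g'(x) for g(x) = cos(x)."""
        return -math.sin(t)

    # Approximate sup of 1/|1 - g'(x)| on [0.5, 1.0], an interval that
    # contains alpha and every iterate generated below.
    grid = [0.5 + 0.5 * k / 1000 for k in range(1001)]
    C = max(1.0 / abs(1.0 - dg(t)) for t in grid)

    x = 1.0
    for n in range(20):
        x_next = g(x)
        # |x_n - alpha| <= C * |x_n - x_{n+1}|
        assert abs(x - alpha) <= C * abs(x - x_next)
        x = x_next
    print(f"bound holds for 20 iterations with C ≈ {C:.3f}")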