Numerical Stability of Fixed-Point Iteration


The fixed-point iteration $x_{n+1} = \phi(x_n)$ for some Lipschitz-continuous function $\phi$ with Lipschitz constant $L<1$ is one of the methods in numerical analysis for obtaining a solution $x^*$ of the nonlinear system $y = f(x^*)$. How would I determine whether the fixed-point iteration is numerically stable? Or, in other words, how would I determine the stability in the sense of forward analysis, that is, the smallest number $\sigma$ which satisfies $$\Vert g(x) - \tilde g(\tilde x)\Vert\leq \kappa\sigma\epsilon,$$ where $\tilde g$ is an algorithm that solves the problem (in this case, the fixed-point iteration), $\kappa$ is the relative condition number of the problem (i.e., $\frac{\Vert x^*\Vert}{\Vert f(x^*)\Vert}\Vert\mathrm D f(x^*)^{-1}\Vert$), and $\epsilon$ is the machine precision?

Unfortunately, I am completely lost. One of the major sources of confusion is that I keep finding very different-looking definitions of numerical stability...

1 Answer

To come up with a useful stability concept, it is a good strategy to think about what could go wrong when solving a particular problem. You want to define stability by looking at the forward error. Thus, assuming that $x$ is the solution computed by the algorithm and $x^*$ the true solution, you want the error to be small, i.e., $$ \| x^* - x \| \ll 1 \,, $$ where "$\ll$" means "much smaller than".

You are assuming that you are solving the problem with a fixed-point iteration: starting from an initial guess $x_0$, you compute a sequence via the update $$ x_{n+1} = \phi(x_n) \,. $$ Usually, you will stop your iteration when $x_{n+1} \approx x_n$, which is equivalent to $\phi(x_n) \approx x_n$. In other words, your iteration stops when $$ \| \phi(x_n) - x_n \| \ll 1 \,. $$

Now, does that mean that $x_n$ is close to the true solution? Not necessarily. Thus, to show stability you need to show that if $$ \| \phi(x) - x \| \le \mu $$ for some sufficiently small value of $\mu$, then $\| x^* - x \|$ is sufficiently small, e.g., by finding a constant $C$ independent of $x$ such that $$ \| x^* - x \| \le C \mu \,. $$ For a contraction this is immediate: the triangle inequality gives $\| x^* - x \| \le \| \phi(x^*) - \phi(x) \| + \| \phi(x) - x \| \le L \| x^* - x \| + \mu$, so $C = 1/(1-L)$ works. This is the first type of stability you want to investigate.
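As a concrete illustration, here is a minimal Python sketch of the residual-based stopping test and the resulting forward-error bound. The choice $\phi = \cos$ is a hypothetical example (not from the question): it is a contraction near its fixed point $x^* \approx 0.7390851$, with Lipschitz constant $L = |\sin x^*| \approx 0.674 < 1$ there.

```python
import math

def fixed_point(phi, x0, mu, max_iter=1000):
    """Iterate x_{n+1} = phi(x_n) until the residual |phi(x) - x| <= mu."""
    x = x0
    for _ in range(max_iter):
        fx = phi(x)
        if abs(fx - x) <= mu:
            return fx
        x = fx
    raise RuntimeError("no convergence within max_iter iterations")

# Hypothetical example: phi(x) = cos(x) is a contraction near its fixed
# point x* ~ 0.7390851, with Lipschitz constant L = |sin(x*)| ~ 0.674.
mu = 1e-8
x = fixed_point(math.cos, 1.0, mu)

x_star = 0.7390851332151607  # fixed point of cos, for comparison only
L = abs(math.sin(x_star))
# Contraction argument: a residual <= mu implies |x* - x| <= mu / (1 - L),
# i.e. the constant C = 1 / (1 - L) from the stability estimate above.
assert abs(x_star - x) <= mu / (1 - L)
```

Note that the algorithm itself only ever sees the computable residual $\|\phi(x) - x\|$; the comparison against $x^*$ is possible here only because the fixed point of this toy example is known.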

There is, however, another type of stability, which is concerned with the stability of the iteration itself. To guarantee the convergence of the iteration, you have assumed that the function $\phi$ is Lipschitz continuous, i.e., there exists a constant $L$ such that $$ \| \phi(x) - \phi(y) \| \le L \| x - y \| \,, $$ and you have assumed that $L < 1$. In reality, however, you compute $\phi$ via some algorithm $\tilde\phi$ which makes a certain error during the computation, i.e., there exists an error function $e$ such that $$ \phi(x) - \tilde\phi(x) = e(x) \,. $$ Thus, your actual iteration is computed using the update $$ x_{n+1} = \tilde\phi(x_n) = \phi(x_n) - e(x_n) \,, $$ and therefore, the iteration might diverge, because $\tilde\phi$ is not guaranteed to be Lipschitz continuous.

Hence, to show convergence you want to make sure that $\tilde\phi$ is Lipschitz continuous with Lipschitz constant smaller than one. This can usually be done by showing that the error $e(x)$ is small enough that $\tilde\phi$ is still Lipschitz continuous with a constant slightly larger than $L$ but still smaller than one.
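To see the effect of an inexact update numerically, here is a hedged sketch (again using the assumed toy example $\phi = \cos$, which is not from the original question) in which each evaluation is polluted by an artificial error $e(x)$ with $\|e(x)\| \le \epsilon$. The perturbed iterates can no longer converge to $x^*$ exactly, but since $\|x_{n+1} - x^*\| \le L\|x_n - x^*\| + \epsilon$, they end up trapped in a ball of radius roughly $\epsilon/(1-L)$ around it:

```python
import math
import random

random.seed(0)
eps = 1e-6  # assumed bound on the evaluation error, |e(x)| <= eps

def phi_tilde(x):
    """Inexact evaluation of phi(x) = cos(x): the computed value differs
    from the exact one by an artificial error e(x) with |e(x)| <= eps."""
    return math.cos(x) - random.uniform(-eps, eps)

x_star = 0.7390851332151607  # exact fixed point of cos
L = abs(math.sin(x_star))    # Lipschitz constant near x*, ~ 0.674 < 1

x = 1.0
for _ in range(100):
    x = phi_tilde(x)

# Unrolling |x_{n+1} - x*| <= L |x_n - x*| + eps shows the iterates
# stagnate at the level eps / (1 - L) instead of converging further.
print(abs(x_star - x))  # small, on the order of eps / (1 - L) or below
```

The same mechanism explains why driving the residual below the noise level $\epsilon$ is pointless in practice: the attainable accuracy of the perturbed iteration is limited by $\epsilon/(1-L)$, not by the number of iterations.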