How can we prove that if $A = M - N$ (non-singular) and the induced matrix norm $\|M^{-1}N\| > 1$, then the iteration error increases at some point?


For a system $A\mathbf{x} = \mathbf{b}$, it seems like I am on the right track in proving the proposed statement; however, I believe I am missing something, and my proof has issues in the second-to-last and last equations.


Given that the induced norm satisfies

\begin{equation} \|\mathbf{M}^{-1}\mathbf{N}\| = \max_{\mathbf{x}\neq \mathbf{0}}{\frac{\|\mathbf{M}^{-1}\mathbf{Nx}\|}{\|\mathbf{x}\|}}>1. \end{equation}

We can state that

\begin{equation} {\|\mathbf{M}^{-1}\mathbf{N}\|}^{k} > {\|\mathbf{M}^{-1}\mathbf{N}\|}^{k-1}> \ldots > {\|\mathbf{M}^{-1}\mathbf{N}\|}. \end{equation}

Let $\mathbf{x} = \mathbf{x}_0 - \mathbf{x}^* \neq \mathbf{0}$ be the maximizer for equation 1, where $\mathbf{x}_0$ is some initial value of the iteration.

Implying:

\begin{equation} \|\mathbf{M}^{-1}\mathbf{N}\| = \frac{\|\mathbf{M}^{-1}\mathbf{N}(\mathbf{x}_0 -\mathbf{x}^*)\|}{\|\mathbf{x}_0 - \mathbf{x}^*\|}>1. \end{equation}
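As a sanity check of this maximizer, here is a minimal numerical sketch (the matrices $\mathbf{M}$ and $\mathbf{N}$ are hypothetical, chosen only so that $\|\mathbf{M}^{-1}\mathbf{N}\|_2 > 1$). For the induced 2-norm, the maximizing direction is the right singular vector belonging to the largest singular value of $\mathbf{M}^{-1}\mathbf{N}$:

```python
import numpy as np

# Hypothetical 2x2 data: for the induced 2-norm, the maximizer x of
# ||M^{-1}N x|| / ||x|| is the right singular vector belonging to the
# largest singular value of T = M^{-1}N.
M = np.array([[2.0, 0.0], [0.0, 1.0]])
N = np.array([[0.0, 3.0], [0.5, 0.0]])
T = np.linalg.solve(M, N)              # T = M^{-1}N = [[0, 1.5], [0.5, 0]]

U, s, Vt = np.linalg.svd(T)
norm_T = s[0]                          # induced 2-norm, here 1.5 > 1
x_max = Vt[0]                          # maximizing direction

ratio = np.linalg.norm(T @ x_max) / np.linalg.norm(x_max)
assert np.isclose(ratio, norm_T)       # the ratio attains the norm
```

Taking $\mathbf{x}_0 = \mathbf{x}^* + $ `x_max` then makes $\mathbf{x}_0 - \mathbf{x}^*$ a maximizer as in equation 3.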

Consider the error recursion $\mathbf{x}^* - \mathbf{x}_{k+1} = \mathbf{M}^{-1}\mathbf{N}(\mathbf{x}^* - \mathbf{x}_k)$; likewise, $\mathbf{x}^* - \mathbf{x}_k = \mathbf{M}^{-1}\mathbf{N}(\mathbf{x}^* - \mathbf{x}_{k-1})$.

This implies,

\begin{equation} \mathbf{x}^* - \mathbf{x}_{k+1} = (\mathbf{M}^{-1}\mathbf{N})^{k+1}(\mathbf{x}^* - \mathbf{x}_0). \end{equation}
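This unrolled error identity can be verified numerically for the splitting iteration $\mathbf{x}_{k+1} = \mathbf{M}^{-1}(\mathbf{N}\mathbf{x}_k + \mathbf{b})$; the matrices below are hypothetical, chosen so that $\mathbf{A} = \mathbf{M} - \mathbf{N}$ is non-singular and $\|\mathbf{M}^{-1}\mathbf{N}\|_2 = 1.5 > 1$:

```python
import numpy as np

# Sketch checking x* - x_{k+1} = (M^{-1}N)^{k+1} (x* - x_0) for the
# splitting iteration x_{k+1} = M^{-1}(N x_k + b).
M = np.array([[2.0, 0.0], [0.0, 2.0]])
N = np.array([[0.0, 3.0], [3.0, 0.0]])
A = M - N                                # det(A) = -5, non-singular
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)           # fixed point x* = A^{-1} b

T = np.linalg.solve(M, N)                # iteration matrix M^{-1}N
x = np.array([0.5, -0.2])                # some initial guess x_0
e0 = x_star - x

for k in range(5):
    x = np.linalg.solve(M, N @ x + b)    # one iteration step
    predicted = np.linalg.matrix_power(T, k + 1) @ e0
    assert np.allclose(x_star - x, predicted)
```

The assertion holds because $\mathbf{M}\mathbf{x}^* = \mathbf{N}\mathbf{x}^* + \mathbf{b}$, so subtracting the update rule gives exactly the one-step recursion above, applied $k+1$ times.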

\begin{equation} \frac{\|\mathbf{x}_{k+1} - \mathbf{x}^*\|\,\|\mathbf{x}_0 - \mathbf{x}^*\|^{k}}{\|\mathbf{x}_0 - \mathbf{x}^*\|^{k+1}} = \frac{\|\mathbf{M}^{-1}\mathbf{N}(\mathbf{x}^* - \mathbf{x}_0)\|^{k+1}}{\|\mathbf{x}_0 - \mathbf{x}^*\|^{k+1}}. \end{equation}

Here we took norms, multiplied both sides by $\|\mathbf{x}_0 - \mathbf{x}^*\|^{k}$, and divided by $\|\mathbf{x}_0 - \mathbf{x}^*\|^{k+1}$, where $\|\cdot\|$ is a vector norm on $\mathbb{R}^n$. By equation 3, this gives

\begin{equation} \frac{\|\mathbf{x}_{k+1} - \mathbf{x}^*\|\,\|\mathbf{x}_0 - \mathbf{x}^*\|^{k}}{\|\mathbf{x}_0 - \mathbf{x}^*\|^{k+1}} = {\|\mathbf{M}^{-1}\mathbf{N}\|}^{k+1}. \end{equation}

\begin{equation} \implies \frac{\|\mathbf{x}_{k} - \mathbf{x}^*\|\,\|\mathbf{x}_0 - \mathbf{x}^*\|^{k-1}}{\|\mathbf{x}_0 - \mathbf{x}^*\|^{k}} = {\|\mathbf{M}^{-1}\mathbf{N}\|}^{k}. \end{equation}

However, from equation 2, we know that ${\|\mathbf{M}^{-1}\mathbf{N}\|}^{k+1} > {\|\mathbf{M}^{-1}\mathbf{N}\|}^{k}$, so

\begin{equation} \frac{\|\mathbf{x}_{k+1} - \mathbf{x}^*\|\,\|\mathbf{x}_0 - \mathbf{x}^*\|^{k}}{\|\mathbf{x}_0 - \mathbf{x}^*\|^{k+1}} > \frac{\|\mathbf{x}_{k} - \mathbf{x}^*\|\,\|\mathbf{x}_0 - \mathbf{x}^*\|^{k-1}}{\|\mathbf{x}_0 - \mathbf{x}^*\|^{k}}. \end{equation}

\begin{equation} \implies \|\mathbf{x}_{k+1} - \mathbf{x}^*\| > \|\mathbf{x}_{k} - \mathbf{x}^*\|. \end{equation}

Therefore, choosing the starting condition $\mathbf{x}_0$ such that $\mathbf{x}_0 - \mathbf{x}^*$ is the maximizing vector for the induced matrix norm $\|\mathbf{M}^{-1}\mathbf{N}\|$, we can deduce that the iteration does not converge.
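A quick numerical illustration of the intended conclusion (again with hypothetical matrices): with $\|\mathbf{M}^{-1}\mathbf{N}\|_2 = 1.25 > 1$ and $\mathbf{x}_0 - \mathbf{x}^*$ along the maximizing direction, the error norm grows strictly at every step.

```python
import numpy as np

# Hypothetical example: ||M^{-1}N||_2 = 1.25 > 1, and x_0 - x* points along
# the maximizing direction, so ||x_k - x*|| increases monotonically.
M = np.eye(2)
N = np.diag([0.0, 1.25])                  # M^{-1}N = N, induced 2-norm 1.25
A = M - N                                 # diag(1, -0.25), non-singular
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)

x = x_star - np.array([0.0, 1.0])         # x_0 - x* = -e_2, a maximizer
errs = [np.linalg.norm(x - x_star)]
for _ in range(6):
    x = np.linalg.solve(M, N @ x + b)     # x_{k+1} = M^{-1}(N x_k + b)
    errs.append(np.linalg.norm(x - x_star))

assert all(b_ > a_ for a_, b_ in zip(errs, errs[1:]))   # strictly increasing
```

One caveat worth noting, which may be the gap in the last two equations: in this sketch the maximizing direction is an eigenvector of $\mathbf{M}^{-1}\mathbf{N}$, so the error stays in a maximizing direction at every step. For a general maximizer, $\mathbf{M}^{-1}\mathbf{N}(\mathbf{x}_0 - \mathbf{x}^*)$ need not itself be a maximizer, so the equalities with ${\|\mathbf{M}^{-1}\mathbf{N}\|}^{k+1}$ only hold as inequalities in general.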