Studying iterative methods for solving (or approximating) linear systems of equations, I came across the following theorem$^1$:
Let the following be an iterative method: $$x^{(0)}\ \text{known},\qquad x^{(k+1)}=Bx^{(k)}+g,\quad k=0,1,2,\dots$$
where $B$ is the iteration matrix and $g$ is a vector.
The iterative method converges, for any chosen initial approximation $x^{(0)}$, if and only if $\rho(B)\lt 1$, where $\rho(B) = \max_{i=1,\dots,n} \left\{|\lambda_i| : \lambda_i \text{ is an eigenvalue of } B\right\}$.
So, basically, the method converges for any chosen initial approximation iff the largest absolute value of $B$'s eigenvalues is less than 1.
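As a quick numerical sanity check (a sketch with a made-up $2\times 2$ matrix $B$ and vector $g$; the values are arbitrary): since $\rho(B)<1$ here, the iteration converges to the fixed point $x=(I-B)^{-1}g$ from any starting point.

```python
import numpy as np

# Hypothetical example data: any B with spectral radius < 1 works.
B = np.array([[0.4, 0.1],
              [0.2, 0.3]])
g = np.array([1.0, 2.0])

rho = max(abs(np.linalg.eigvals(B)))        # spectral radius of B (here 0.5)
x_star = np.linalg.solve(np.eye(2) - B, g)  # fixed point x = Bx + g

x = np.array([100.0, -50.0])                # arbitrary initial guess
for _ in range(200):
    x = B @ x + g                           # the iteration x^(k+1) = Bx^(k) + g

print(rho, np.linalg.norm(x - x_star))      # rho < 1, error essentially zero
```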
My question
I'm not entirely sure how to interpret the theorem:
- For any chosen $x^{(0)}$: (the method converges $\Leftrightarrow \rho(B) \lt 1$)
- $\bigl(\text{The method converges for any chosen } x^{(0)}\bigr) \Leftrightarrow \rho(B) \lt 1$
So, if $\rho(B) \ge 1$, does the method always diverge (interpretation 1), or can it converge for some (but not all) initial approximations (interpretation 2)?
$^1$ (Translated by me; if some terms are a bit off, please let me know.)
Inverting the equivalence gives (recall how quantifiers are negated): $$\rho(B)\ge 1 \quad\Longleftrightarrow\quad \text{the method diverges for at least one choice of } x^{(0)}.$$
Note that the inverted statement says nothing about the existence of an $x^{(0)}$ for which the method converges. Two scenarios are possible:
- The method may still converge for some initial guesses. Note that the error $e^{(k)}=x-x^{(k)}$ satisfies $e^{(k)}=Be^{(k-1)}=B^ke^{(0)}$, where $e^{(0)}$ is the error associated with the initial approximation $x^{(0)}$. The method hence converges if $e^{(0)}\in\mathcal{S}$, where $\mathcal{S}$ is an invariant subspace of $B$ corresponding to the eigenvalues of $B$ with magnitudes strictly smaller than one.
- If all eigenvalues of $B$ have magnitude greater than or equal to one, then the method does not converge for any initial guess (except the trivial case $x^{(0)}=x$, that is, $e^{(0)}=0$).
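The first scenario can be seen numerically with a diagonal toy example (a sketch; the matrix and vectors are made up): take $B=\operatorname{diag}(1/2,\,2)$, so $\rho(B)=2\ge 1$. An initial error lying in the eigenspace of the eigenvalue $1/2$ decays, while a generic initial error blows up.

```python
import numpy as np

# Hypothetical example: rho(B) = 2 >= 1, yet one invariant subspace is stable.
B = np.diag([0.5, 2.0])
g = np.array([1.0, 1.0])
x_star = np.linalg.solve(np.eye(2) - B, g)  # exact solution of x = Bx + g

# Initial error e0 = (1, 0): in the eigenspace of lambda = 1/2 -> converges.
x = x_star + np.array([1.0, 0.0])
for _ in range(100):
    x = B @ x + g
err_stable = np.linalg.norm(x - x_star)     # shrinks like (1/2)^k

# Initial error e0 = (0, 1): in the eigenspace of lambda = 2 -> diverges.
y = x_star + np.array([0.0, 1.0])
for _ in range(60):
    y = B @ y + g
err_unstable = np.linalg.norm(y - x_star)   # grows like 2^k

print(err_stable, err_unstable)
```

Because $B$ is diagonal, the two error components evolve independently: $e^{(k)} = \bigl((1/2)^k e^{(0)}_1,\ 2^k e^{(0)}_2\bigr)$, which makes the role of the invariant subspace explicit.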