Divergence of fixed-point iteration for real starting values


Consider the linear system of equations $Ax = b$ with invertible $A\in \mathrm{GL}(n,\mathbb R)$ and $b\in\mathbb R^n$. For $A = M - N$ with invertible $M$ the solution $x_* = A^{-1}b$ is a fixed point of the function $$f(x) = M^{-1}Nx + M^{-1}b = Tx + c$$ with $T := M^{-1}N$ and $c := M^{-1}b$.

Thus, Banach's fixed-point theorem can be used to show that the sequence given by $x^{(k)} = Tx^{(k-1)} + c$ converges to $x_*$ for any starting value $x^{(0)} \in \mathbb R^n$ if and only if the spectral radius $\rho(T) < 1$.
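For concreteness, the iteration can be sketched numerically. This is a hedged example: the matrix $A$, the Jacobi-type splitting $M = \operatorname{diag}(A)$, and all values below are my own assumptions, not part of the question.

```python
import numpy as np

# Fixed-point iteration x^(k) = T x^(k-1) + c for the splitting A = M - N.
# Assumed example: Jacobi splitting M = diag(A) of a diagonally dominant A,
# so that rho(T) < 1 and Banach's theorem guarantees convergence.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

M = np.diag(np.diag(A))
N = M - A
T = np.linalg.solve(M, N)        # T = M^{-1} N
c = np.linalg.solve(M, b)        # c = M^{-1} b

rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1                   # here rho(T) ~ 0.29, so the iteration contracts

x = np.zeros(2)
for _ in range(100):
    x = T @ x + c                # x^(k) = T x^(k-1) + c
x_star = np.linalg.solve(A, b)   # exact fixed point x_* = A^{-1} b
print(np.allclose(x, x_star))
```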

I am currently trying to prove the "only if" part of the theorem above. This is relatively straightforward in the complex case: If $\rho(T)\ge 1$ there is an eigenvalue $\lambda \in \mathbb C$ with $|\lambda|\ge 1$ and a corresponding eigenvector $z \in \mathbb C^n$ such that $Tz = \lambda z$. Then for $x^{(0)} := z - x_*$ we can show that $$x^{(k)} - x_* = T^k(x^{(0)} - x_*) = T^kz = \lambda^k z$$ and therefore $\|x^{(k)} - x_*\| = |\lambda|^k\|z\|$ which does not converge to $0$ for $|\lambda|\ge1$.
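The complex-case computation can be checked numerically. A hedged sketch with an assumed $2\times 2$ rotation-scaling matrix $T$ whose eigenvalue $\lambda = \mu + i\nu$ satisfies $|\lambda| > 1$:

```python
import numpy as np

# Assumed example: T = [[mu, nu], [-nu, mu]] has eigenvalues mu +/- i*nu,
# with complex eigenvector z = [1, i] for lambda = mu + i*nu.
mu, nu = 1.0, 0.5
T = np.array([[mu, nu],
              [-nu, mu]])
lam = mu + 1j * nu               # |lam| = sqrt(1.25) > 1
z = np.array([1.0, 1j])
assert np.allclose(T @ z, lam * z)

# Error propagation: x^(k) - x_* = T^k z = lam^k z, so the error norm
# grows exactly like |lam|^k and cannot converge to 0.
err = z.copy()
for k in range(1, 11):
    err = T @ err
    assert np.isclose(np.linalg.norm(err), abs(lam)**k * np.linalg.norm(z))
```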

But how can I show that there is a real starting point $x^{(0)} \in \mathbb R^n$ for which the iteration does not converge? In general, there is no real eigenvector corresponding to the eigenvalue $\lambda$ with $|\lambda| \ge 1$, so I don't know what to use as a starting point.



BEST ANSWER

If $u$ is a (complex) eigenvector for complex eigenvalue $\lambda$, write $u = x + i y$ where $x$ and $y$ are real. Now if $T^n x \to 0$ and $T^n y \to 0$ as $n \to \infty$, the same would be true of $T^n x + i T^n y = T^n u$, but this is not the case.
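This contrapositive can be made quantitative via the triangle inequality, $\|T^n u\| \le \|T^n x\| + \|T^n y\|$: since $\|T^n u\| = |\lambda|^n \|u\| \ge \|u\|$, at least one of the two real sequences stays bounded away from $0$. A hedged sketch with an assumed rotation matrix:

```python
import numpy as np

# Assumed example: T = [[0, 1], [-1, 0]] has eigenvalue lambda = i
# (|lambda| = 1) with complex eigenvector u = [1, i] = x + i*y.
T = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
lam = 1j
u = np.array([1.0, 1j])
assert np.allclose(T @ u, lam * u)
x, y = u.real, u.imag

vx, vy = x.copy(), y.copy()      # vx = T^n x, vy = T^n y
for _ in range(50):
    vx, vy = T @ vx, T @ vy
# Triangle inequality: one of the two real norms is at least ||u|| / 2,
# so T^n x and T^n y cannot both tend to 0.
assert max(np.linalg.norm(vx), np.linalg.norm(vy)) >= np.linalg.norm(u) / 2
```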

ANSWER

Take a consistent (submultiplicative) matrix norm: you know that $\rho(T)^n = \rho(T^n) \le \|T^n\|$.

So if the iteration converged for every real starting vector, the error $x^{(k)} - x_* = T^k(x^{(0)} - x_*)$ would tend to $0$ for every real initial error, which forces $T^n \to 0$ and hence $\|T^n\| \to 0$. But $\|T^n\| \ge \rho(T)^n \ge 1$ for $\rho(T) \ge 1$, a contradiction.

ANSWER

Assume $T$ has an eigenvalue $\lambda\in\mathbb{C}$ with $|\lambda|\geq 1$ and an associated eigenvector $u$. It is easy to find a real starting vector if $\lambda$ is real or purely imaginary; so assume the "generic" case $\lambda=\mu+i\nu$, where both real $\mu$ and $\nu$ are nonzero. Let $u:=x+iy$ where $x$ and $y$ are real vectors. Since $Tu=\lambda u$, comparing real and imaginary parts gives $$ Tx=\mu x-\nu y, \quad Ty=\nu x+\mu y. $$ With $U:=[x,y]$ we hence have $$ TU=U\pmatrix{\mu&\nu\\-\nu&\mu}=:UM, \quad T^nU=UM^n. $$ Since $M^TM=(\mu^2+\nu^2)I$, i.e., $M$ is $|\lambda|$ times an orthogonal matrix, we get with $e_1=[1,0]^T$ that $\|M^ne_1\|_2=|\lambda|^n$. Note also that $\nu\neq 0$ forces $x$ and $y$ to be linearly independent (otherwise $u$ would be a scalar multiple of a real vector, making $\lambda$ real), so $U$ has full column rank and $\sigma_{\min}(U)>0$. Since $\|Bz\|_2\geq \sigma_{\min}(B)\|z\|_2$, where $\sigma_{\min}$ denotes the minimal singular value of the argument (the square root of the minimal eigenvalue of $B^TB$), we have $$ \|T^nx\|_2=\|T^nUe_1\|_2=\|UM^ne_1\|_2\geq\sigma_{\min}(U)\|M^ne_1\|_2=\sigma_{\min}(U)|\lambda|^n.$$ Hence with the initial error $x^{(0)}-x_*$ equal (or proportional) to the real part of the eigenvector $u$, the iteration diverges. We can also choose the imaginary part $y$ instead; the proof is similar.
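The bound $\|T^nx\|_2\geq\sigma_{\min}(U)|\lambda|^n$ can be verified numerically. A hedged sketch with an assumed example matrix having a genuinely complex eigenvalue:

```python
import numpy as np

# Assumed example: T = [[1, 1], [-1, 1]] has eigenvalues 1 +/- i,
# so |lambda| = sqrt(2) > 1 and mu, nu are both nonzero.
T = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
w, V = np.linalg.eig(T)
i = np.argmax(w.imag)            # pick lambda = mu + i*nu with nu > 0
lam, u = w[i], V[:, i]
x, y = u.real, u.imag            # u = x + i*y

U = np.column_stack([x, y])
sigma_min = np.linalg.svd(U, compute_uv=False)[-1]
assert sigma_min > 0             # x, y linearly independent since nu != 0

# Starting from the real part x, the error norm ||T^n x|| is bounded
# below by sigma_min(U) * |lambda|^n, which diverges.
v = x.copy()
for n in range(1, 21):
    v = T @ v
    assert np.linalg.norm(v) >= sigma_min * abs(lam)**n * (1 - 1e-6)
```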