Iterative solving of linear systems and relationship to Jacobi/Gauss-Seidel

In a numerics lecture, we were presented with both the Gauss–Seidel and the Jacobi method for iteratively solving linear systems of equations of the form $Ax = b$, which I understand and am able to apply.

In the lecture, we were additionally presented with the following bit of information, which I don't understand the significance of:

The problem $Ax=b$ can be solved using a fixed-point iteration based on the Banach fixed-point theorem, by performing the following trivial transformations: $$Ax=b \Leftrightarrow 0 = (b-Ax) \Leftrightarrow 0 = \omega (b-Ax) \Leftrightarrow x = x+\omega (b-Ax)$$ where $\omega > 0$. This defines an iteration function $g$: $$g(x) = (I - \omega A)x +\omega b$$ The iteration sequence $x^{(n+1)} = g(x^{(n)})$ then converges to the solution $x$. Such an $\omega$ can be found if $A$ is positive definite and symmetric.
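To see the iteration in action, here is a minimal sketch in Python/NumPy. The matrix $A$ and vector $b$ are made-up example values (with $A$ symmetric positive definite, as the theorem requires); the choice $\omega = 1/\lambda_{\max}(A)$ is one safe value inside the interval $(0, 2/\lambda_{\max}(A))$ on which the iteration converges for SPD $A$:

```python
import numpy as np

# Hypothetical example system; A is symmetric positive definite.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# For SPD A, the iteration x <- x + omega*(b - A x) converges
# whenever 0 < omega < 2 / lambda_max(A).
lam_max = np.linalg.eigvalsh(A).max()
omega = 1.0 / lam_max  # safely inside (0, 2/lambda_max)

x = np.zeros_like(b)
for _ in range(200):
    x = x + omega * (b - A @ x)  # equivalently: x = (I - omega*A) @ x + omega*b
```

After enough iterations, `x` agrees with the direct solution `np.linalg.solve(A, b)` to high accuracy.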

My question is: How does this relate to the two methods I mentioned before? Are those methods special cases of this theorem? If not, is this new method something one can actually use to solve a problem in an exam? How can I find the $\omega$ needed for this to work?

BEST ANSWER

The Jacobi and Gauss–Seidel methods also come from rewriting the linear system as a fixed-point problem, based in particular on the additive decomposition $A = L + D + U$ into the strictly lower-triangular, diagonal, and strictly upper-triangular parts of $A$: Jacobi iterates $x^{(n+1)} = D^{-1}\big(b - (L+U)x^{(n)}\big)$, while Gauss–Seidel iterates $x^{(n+1)} = (D+L)^{-1}\big(b - Ux^{(n)}\big)$. So the comment just provides an alternative way of obtaining an equivalent fixed-point problem; the iteration $x^{(n+1)} = x^{(n)} + \omega\,(b - Ax^{(n)})$ is known as Richardson iteration. It converges if and only if the spectral radius of $I - \omega A$ is less than $1$, and when $A$ is symmetric positive definite this holds for every $\omega$ with $0 < \omega < 2/\lambda_{\max}(A)$.
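To make the parallel concrete, here is a short sketch of both methods written as fixed-point iterations from the splitting $A = L + D + U$. The system below is a made-up example (strictly diagonally dominant, a standard sufficient condition for both methods to converge):

```python
import numpy as np

# Hypothetical example system; strict diagonal dominance guarantees
# convergence of both Jacobi and Gauss-Seidel.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, k=-1)      # strictly lower-triangular part
U = np.triu(A, k=1)       # strictly upper-triangular part

# Jacobi: x <- D^{-1} (b - (L+U) x)
x_j = np.zeros_like(b)
for _ in range(100):
    x_j = np.linalg.solve(D, b - (L + U) @ x_j)

# Gauss-Seidel: x <- (D+L)^{-1} (b - U x)
x_gs = np.zeros_like(b)
for _ in range(100):
    x_gs = np.linalg.solve(D + L, b - U @ x_gs)
```

Both `x_j` and `x_gs` approach `np.linalg.solve(A, b)`; the only difference between the two methods is which part of the splitting is kept on the left-hand side of the fixed-point equation.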