Computational complexity of Gauß-Seidel method


I'm currently studying iterative methods to solve equations of the form $Ax=b$.

One method presented in my lecture notes is the Gauß-Seidel method, where one step is defined as:

$$x^{(k+1)} = (L+D)^{-1}(b-Rx^{(k)})$$

with $L + D + R = A$, where $D$ is the diagonal of $A$, and $L$ and $R$ are the strictly lower and strictly upper triangular parts of $A$.

There is a statement which goes something like this:

One step of the Gauß-Seidel method is about as expensive as one multiplication of a matrix with a vector.

But I don't see how this statement holds true. I know that the inverse can be computed once at the start of the computation. However, I see two multiplications of a matrix with a vector: the first is $R \cdot x^{(k)}$ and the second is $(L+D)^{-1} \cdot (b-Rx^{(k)})$.

Is the rationale behind the statement that both multiplications involve a triangular matrix, each of which "accounts for half of a multiplication"?

Or did I miss something entirely different?

There is 1 answer below.


If you look at it in equation form:

$$x_1^{(k+1)}=\frac{1}{a_{11}}\left(b_1-a_{12}x_2^{(k)}-a_{13}x_3^{(k)}-\dots-a_{1n}x_n^{(k)}\right)\\ \vdots\\ x_i^{(k+1)}=\frac{1}{a_{ii}}\left(b_i-a_{i1}x_1^{(k+1)}-\dots-a_{i,i-1}x_{i-1}^{(k+1)}-a_{i,i+1}x_{i+1}^{(k)}-\dots-a_{in}x_n^{(k)}\right)\\ \vdots\\ x_n^{(k+1)}=\frac{1}{a_{nn}}\left(b_n-a_{n1}x_1^{(k+1)}-a_{n2}x_2^{(k+1)}-\dots-a_{n,n-1}x_{n-1}^{(k+1)}\right)$$

This is really just like multiplying a matrix by a vector: every off-diagonal entry of $A$ is used exactly once per step, giving roughly $n^2$ multiply-adds.

Even in your matrix form, the multiplication by $R$ is a multiplication by a strictly upper triangular matrix, which costs about half the operations of a multiplication by a full matrix. The multiplication by $(L+D)^{-1}$ is not carried out by forming the inverse explicitly; instead, the lower triangular system $(L+D)y = b - Rx^{(k)}$ is solved by forward substitution, which accounts for the other half of the matrix. Combining the two, one step costs about as much as one matrix-vector multiplication.
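To make the operation count concrete, here is a minimal sketch of one Gauß-Seidel sweep in the component form above (the system and iteration count in the demo are my own assumed example):

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: solves (L+D) x_new = b - R x_old.

    Each off-diagonal entry a[i][j] is used exactly once per sweep,
    so the cost is ~n^2 multiply-adds -- the same as one
    matrix-vector product with A.
    """
    n = len(b)
    x = x.copy()
    for i in range(n):
        # j < i uses the already-updated components (the L part);
        # j > i uses the components from the previous sweep (the R part).
        s = sum(A[i, j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i, i]
    return x

# Demo on a small diagonally dominant system (assumed example).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(25):
    x = gauss_seidel_step(A, b, x)
```

Note that the loop never forms $(L+D)^{-1}$: updating $x_i$ using the already-updated entries $x_1,\dots,x_{i-1}$ is exactly the forward substitution step.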