Numerical Optimization: Gauss-Seidel implementation?


Reading through a book on numerical optimization, the very first algorithm explained is Gauss-Seidel.

Suppose we want to solve the problem

$$ \min_{x \in \mathbb{R}^n} f(x) $$

Assume $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is differentiable, and assume the method converges.

Let $g(x) = \nabla f(x)$, so this is a vector. The algorithm is explained as follows

  1. All coordinates are fixed, say to 0.
  2. The first coordinate is updated by solving the first equation, $g_1(x) = 0$, with respect to that coordinate.
  3. And so on, up to the $n$-th coordinate.
  4. The process is repeated.
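To convince myself the steps above make sense, here is a worked special case (my own addition, not from the book): when $f$ is a convex quadratic with $A$ symmetric positive definite, each solve in step 2 is the classic linear Gauss-Seidel update. Take

$$ f(x) = \tfrac{1}{2} x^\top A x - b^\top x, \qquad g(x) = \nabla f(x) = Ax - b. $$

Then the $j$-th equation $g_j(x) = 0$ is linear in $x_j$, and solving it with the other coordinates fixed gives

$$ x_j \leftarrow \frac{1}{a_{jj}} \Big( b_j - \sum_{i \neq j} a_{ji} x_i \Big), $$

which is exactly the Gauss-Seidel iteration for the linear system $Ax = b$.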

I'm trying to work out a more detailed procedure based on that description, because I'm not sure how it is supposed to be implemented eventually, even though it is a method of little practical interest.

This is what I've worked out

  1. $x^{(0)} = 0$
  2. $k \leftarrow 1$
  3. $\text{Repeat until convergence}:$
  4. $\;\;\; \text{For $j = 1 \ldots n$ do :}$
  5. $\;\;\;\;\;\; x^{(k)}_j \leftarrow \text{solve } g_j(x^{(k)}_1,\ldots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\ldots,x_n^{(k-1)}) = 0 \text{ with respect to } x_j$

  6. $\;\;\; k \leftarrow k + 1$

Where $\text{convergence}$ is a stopping test. So in this algorithm I can apply any single-variable root-finding method to solve each scalar equation $g_j = 0$, and I also need to choose the stopping test (it could be the classic $\|x^{(k)} - x^{(k-1)}\|_2 < \varepsilon$, for example).
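The pseudocode above can be sketched in Python. This is my own minimal sketch, not from the book: the names `gauss_seidel_min` and `bisect` and the bracket half-width are my own choices, bisection stands in for "any single-variable root finder", and I assume each scalar equation $g_j = 0$ has a sign change inside the bracket around the current iterate.

```python
import math

def bisect(fun, a, b, tol=1e-12, max_steps=200):
    """Plain bisection for a scalar root of fun on [a, b]."""
    fa, fb = fun(a), fun(b)
    if fa * fb > 0:
        raise ValueError("no sign change in the bracket [a, b]")
    for _ in range(max_steps):
        m = 0.5 * (a + b)
        fm = fun(m)
        if fa * fm <= 0:
            b = m                # root is in [a, m]
        else:
            a, fa = m, fm        # root is in [m, b]
        if b - a < tol:
            break
    return 0.5 * (a + b)

def gauss_seidel_min(g, n, x0=None, tol=1e-8, max_iter=100, bracket=10.0):
    """Coordinate-wise Gauss-Seidel: cyclically solve g_j(x) = 0 for x_j,
    reusing the already-updated coordinates 1..j-1 within each sweep."""
    x = list(x0) if x0 is not None else [0.0] * n   # step 1: start at 0
    for _ in range(max_iter):
        x_old = x[:]
        for j in range(n):
            # Scalar equation in t: the j-th gradient component with
            # x_j replaced by t and all other coordinates held fixed.
            def g_j(t, j=j):
                y = x[:]
                y[j] = t
                return g(y)[j]
            # Any 1-D root finder works here; bisection is the simplest.
            x[j] = bisect(g_j, x[j] - bracket, x[j] + bracket)
        # Stopping test: ||x^(k) - x^(k-1)||_2 < tol.
        if math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_old))) < tol:
            break
    return x
```

As a check, for the convex quadratic $f(x) = (x_1-1)^2 + (x_2-3)^2 + x_1 x_2$ the gradient is $g(x) = (2(x_1-1)+x_2,\ 2(x_2-3)+x_1)$, and solving $g(x)=0$ by hand gives the minimizer $(-2/3,\ 10/3)$, which the sweeps should approach.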