Solving linear systems by preconditioned conjugate gradient algorithm


Let a linear system $AU = F$ arise from the second-order central finite difference approximation to a Poisson boundary value problem, as follows:

$$-\Delta u = f \quad \text{in } \Omega := (0,1) \times (0,1)$$

where $u = 0$ on the boundary of $\Omega$ and

$$f = 2 \pi^2 \sin(\pi x)\sin(\pi y)$$

The exact solution is $u = \sin(\pi x) \sin(\pi y)$. The grid size is $h = \frac{1}{n}$, with grid points $(x_i, y_j)$ where $x_i = ih$ and $y_j = jh$ for $i, j = 0, 1, 2, \ldots, n$. The grid points are ordered from left to right and bottom to top, producing a linear system in which $A$ is symmetric positive definite.

Write a program that solves the linear system by the preconditioned conjugate gradient algorithm with the Jacobi preconditioner.

Then use your program to run the numerical experiments with an initial guess of zero and the following stopping rule: terminate the iteration as soon as the relative residual norm satisfies

$$\| F - AU \|_2 < 10^{-5} \|F\|_2$$

I am unsure where to start on this problem in terms of setting up the matrix $A$ and the vectors $U$ and $F$. The whole intro confuses me. I understand the programming end of it, but setting up the matrix does not make sense to me. Any help would be much appreciated!

BEST ANSWER

You are trying to calculate the numbers $u_{i,j}$ for $0 < i,j < n$, where $u_{i,j}$ is your approximation to the value of the function $u$ at the corresponding point $(\frac{i}{n},\frac{j}{n})$.

Note that you do not need to calculate the boundary values $u_{0,j}$, $u_{n,j}$, $u_{i,0}$, $u_{i,n}$, because you already know they are zero.

Because these are the numbers that you are trying to calculate, these are your unknowns. That is, $U$ is a column vector with $(n-1)^2$ elements. You can use either row-major or column-major order; it doesn't matter. But let's say it's this:

$$U = \begin{bmatrix}u_{1,1} \\ u_{2,1} \\ \vdots \\ u_{n-1,1} \\ u_{1,2} \\ \vdots \\ u_{n-1,n-1}\end{bmatrix}$$
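In code, this ordering amounts to a simple index map. Here is a hypothetical helper (the function name is my own) that sends grid point $(i,j)$ to its position in $U$, with $i$ varying fastest:

```python
# Hypothetical helper (my own naming): maps interior grid point (i, j),
# 1 <= i, j <= n-1, to its position in the vector U under the ordering
# shown above (left to right, then bottom to top).
def vec_index(i, j, n):
    return (j - 1) * (n - 1) + (i - 1)
```

For example, with $n = 4$ the point $(1,1)$ lands at position $0$ and $(3,3)$ at position $8$.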

It's important to work this out first, because now you can consider the linear system as a whole. You want the number of equations to equal the number of unknowns, otherwise the problem is either underdetermined or overdetermined. So $A$ should be an $(n-1)^2 \times (n-1)^2$ matrix, and $F$ should be a column vector with $(n-1)^2$ elements.

Moreover, think hard about the similarity between $AU = F$ and $-\Delta u = f$. We know that $U$ corresponds to $u$, and so if you squint at this really hard, you may be able to see that $A$ should correspond to $-\Delta$ (somehow) and $F$ should correspond to $f$.

Long story short, if $F$ is a column vector with $(n-1)^2$ elements and it corresponds to $f$, chances are very good that it's going to be $f$ evaluated at the lattice points:

$$F = \begin{bmatrix}f_{1,1} \\ f_{2,1} \\ \vdots \\ f_{n-1,1} \\ f_{1,2} \\ \vdots \\ f_{n-1,n-1}\end{bmatrix}\,\mathrm{where}\,\,f_{i,j} = f\left(\frac{i}{n},\frac{j}{n}\right)$$
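As a sketch (the function name is my own), assembling $F$ is just evaluating $f(x,y) = 2\pi^2 \sin(\pi x)\sin(\pi y)$ at the interior grid points in the ordering above:

```python
import numpy as np

# Sketch: evaluate f(x, y) = 2*pi^2*sin(pi x)*sin(pi y) at the interior
# grid points, in left-to-right, bottom-to-top order (i varies fastest).
def assemble_F(n):
    h = 1.0 / n
    m = n - 1                          # interior points per direction
    F = np.empty(m * m)
    for j in range(1, n):              # bottom to top
        for i in range(1, n):          # left to right
            F[(j - 1) * m + (i - 1)] = (2 * np.pi**2
                                        * np.sin(np.pi * i * h)
                                        * np.sin(np.pi * j * h))
    return F
```

With $n = 4$ this gives a length-9 vector whose middle entry is $f(\frac12, \frac12) = 2\pi^2$.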

So now, this just leaves $A$. We want to find a $(n-1)^2 \times (n-1)^2$ matrix $A$ such that $AU$ is a finite difference approximation to $-\Delta u$ evaluated at the corresponding points.

We know that:

$$\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}$$

The central finite difference approximation to the second derivative (written here for $u$ in the $x$-direction, to avoid confusion with the right-hand side $f$) is:

$$\frac{\partial^2 u}{\partial x^2} \approx \frac{u(x - h, y) - 2 u(x, y) + u(x + h, y)}{h^2}$$

So:

$$\Delta u(x,y) \approx \frac{1}{h^2}\left( u(x - h,y) + u(x + h,y) + u(x, y - h) + u(x, y + h) - 4 u(x,y) \right)$$

That is:

$$-\Delta u_{i,j} \approx -\frac{1}{h^2} \left( u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} -4u_{i,j}\right)$$
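This stencil can also be applied without ever forming $A$; here is a matrix-free sketch (my own naming, zero boundary values assumed by padding) that is useful for checking a later explicit matrix:

```python
import numpy as np

# Sketch: apply the 5-point stencil for -Laplacian to an (n-1) x (n-1)
# array of interior values, with zero Dirichlet values outside the array.
def apply_neg_laplacian(u_grid, h):
    m = u_grid.shape[0]
    padded = np.zeros((m + 2, m + 2))      # zero boundary ring
    padded[1:-1, 1:-1] = u_grid
    return (4 * padded[1:-1, 1:-1]
            - padded[:-2, 1:-1] - padded[2:, 1:-1]
            - padded[1:-1, :-2] - padded[1:-1, 2:]) / h**2
```

A handy sanity check: the grid samples of $\sin(\pi x)\sin(\pi y)$ form an eigenvector of this operator with eigenvalue $\frac{8}{h^2}\sin^2(\frac{\pi h}{2})$.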

Using the boundary conditions $u_{0,j} = u_{n,j} = u_{i,0} = u_{i,n} = 0$, this gives you enough information to construct $A$. For example, let's suppose that $n = 4$, so $h = \frac{1}{4}$ and $-\frac{1}{h^2} = -16$. Then you get the linear system:

$$-\frac{1}{h^2}\begin{bmatrix}-4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & -4 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -4 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & -4 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & -4 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & -4 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & -4 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & -4 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & -4 \end{bmatrix} \begin{bmatrix}u_{1,1} \\ u_{2,1} \\ u_{3,1} \\ u_{1,2} \\ u_{2,2} \\ u_{3,2} \\ u_{1,3} \\ u_{2,3} \\ u_{3,3}\end{bmatrix} = \begin{bmatrix}f_{1,1} \\ f_{2,1} \\ f_{3,1} \\ f_{1,2} \\ f_{2,2} \\ f_{3,2} \\ f_{1,3} \\ f_{2,3} \\ f_{3,3}\end{bmatrix}$$
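One compact way to assemble $A$ in code is the Kronecker-sum construction, which reproduces the 5-point stencil matrix above; this is a sketch (the function name is my own, and I use a dense matrix only because the example is tiny):

```python
import numpy as np

# Sketch: A = (1/h^2) * (I kron T + T kron I), where T is the (n-1) x (n-1)
# 1-D second-difference matrix tridiag(-1, 2, -1). This matches the
# 5-point stencil ordering with i varying fastest.
def assemble_A(n):
    h = 1.0 / n
    m = n - 1
    T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    I = np.eye(m)
    return (np.kron(I, T) + np.kron(T, I)) / h**2
```

For $n = 4$ the diagonal entries are $4/h^2 = 64$ and the off-diagonal couplings are $-1/h^2 = -16$, matching the matrix displayed above after multiplying through the prefactor.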

Finally, we need to talk about the constant factor $-\frac{1}{h^2}$. You see, to use conjugate gradient methods to solve this system, $A$ needs to be symmetric and positive definite. It's clearly symmetric, but is it positive definite? $A$ is a negative constant multiplied by a matrix whose diagonal terms are all negative, so this may not be immediately obvious to you.

I'll let you discover the answer for yourself, but long story short, if it turns out to be negative definite, negating both sides fixes the problem.
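Putting it all together, here is a minimal sketch of preconditioned CG with the Jacobi (diagonal) preconditioner $M = \operatorname{diag}(A)$, using the zero initial guess and the stopping rule from the problem statement (function and variable names are my own):

```python
import numpy as np

# Sketch of preconditioned conjugate gradient with Jacobi preconditioning.
# Stops when ||F - A U||_2 < tol * ||F||_2, starting from U = 0.
def pcg_jacobi(A, F, tol=1e-5, max_iter=1000):
    U = np.zeros_like(F)               # initial guess of zero
    r = F - A @ U                      # initial residual
    Minv = 1.0 / np.diag(A)            # inverse of the Jacobi preconditioner
    z = Minv * r                       # preconditioned residual
    p = z.copy()                       # search direction
    rz = r @ z
    normF = np.linalg.norm(F)
    k = 0
    while np.linalg.norm(r) >= tol * normF and k < max_iter:
        Ap = A @ p
        alpha = rz / (p @ Ap)
        U = U + alpha * p
        r = r - alpha * Ap
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
        k += 1
    return U, k
```

On the $n = 4$ example, the computed $U$ should satisfy the relative-residual test and sit close to the exact solution samples $\sin(\pi x_i)\sin(\pi y_j)$, up to discretization error.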