Chebyshev differentiation matrices and Galerkin method


I've been reading *Spectral Methods in MATLAB* by Lloyd N. Trefethen and I'm interested in solving PDEs with spectral methods. From what I understand, the book presents a method (at least up to chapter 6, "Chebyshev Differentiation Matrices") that I take to be as follows:

Say we're trying to solve $u_{xx}=f(x)$. We approximate $u(x)$ by the Lagrange interpolating polynomial $p(x)=\sum_{j=0}^{n}y_j \prod_{k\neq j}\bigg(\frac{x-x_k}{x_j-x_k}\bigg)$. If we take the evaluation points $\{x_j\}$ to be the Chebyshev nodes, we avoid the Runge phenomenon. Knowing this, it is "easy" to compute a matrix $D_N$ that gives the action of the differential operator on the grid values, so the problem reduces to a linear system of equations, and the subsequent interpolation of the computed values $u_i$ gives a very good approximation to the exact solution $u$.
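For concreteness, here is a small sketch of that procedure in Python/NumPy (essentially a translation of Trefethen's `cheb.m`): build $D_N$ on the Chebyshev points and solve $u_{xx}=f$ with homogeneous Dirichlet boundary conditions by dropping the boundary rows and columns. The test problem $f\equiv-2$, with exact solution $u=1-x^2$, is my own choice for illustration:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x (Trefethen, Ch. 6)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points on [-1,1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T                                      # x_i - x_j
    D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal = negative row sums
    return D, x

# Solve u_xx = f on [-1,1] with u(+-1) = 0.
# Illustrative test problem: f = -2, exact solution u = 1 - x^2.
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]          # second-derivative matrix, interior points only
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, -2.0 * np.ones(N - 1))
```

Since the exact solution here is itself a polynomial, the spectral solve recovers it to machine precision; for general smooth $f$ the error decays exponentially in $N$.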

Now I sort of understand this. But what we're doing at the end of the day is computing the values $u_i$ at the points $\{x_i\}$, which guarantee a very good interpolation. And this is fine. But what does it have to do with the Galerkin method? What I understand of the Galerkin method is that we let $u(x)\approx\sum_{n=0}^{N} \tilde{u}_n\phi_n(x)$ for some family of functions $\phi_n$ that respect the boundary conditions of the problem at hand, typically built from an orthogonal basis of polynomials up to degree $N$. Our task is then to minimize the residual $R=\tilde u_{xx}-f$, and after some working out we are left with finding the coefficients $\tilde u_n$. This looks very different from the first method, where we had to find the values of $u$ at specific points $x_i$ meticulously chosen to minimize the interpolation error. But since in class we were first introduced to the Galerkin method (and also the tau and collocation methods), and only afterwards to Trefethen's approach, without any bridge between them, I'm struggling to see what the connection is, or whether there is any connection at all. Can anyone give a hand?
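To pin down what I mean by the Galerkin setup, here is a minimal sketch for the same model problem $u_{xx}=f$ with $u(\pm1)=0$. The basis $\phi_n(x)=\sin(n\pi(x+1)/2)$ is my own illustrative choice: it satisfies the boundary conditions and is $L_2$-orthogonal on $[-1,1]$, and since $\phi_n''=-(n\pi/2)^2\phi_n$, requiring the residual to be orthogonal to each $\phi_m$ decouples the coefficients:

```python
import numpy as np

# Galerkin sketch for u_xx = f on [-1,1], u(+-1) = 0, with the (illustrative)
# basis phi_n(x) = sin(n*pi*(x+1)/2). Test problem: f = -2, exact u = 1 - x^2.
N = 50
xq, wq = np.polynomial.legendre.leggauss(200)   # Gauss-Legendre quadrature on [-1,1]
f = lambda t: -2.0 * np.ones_like(t)

u_tilde = np.zeros(N + 1)
for n in range(1, N + 1):
    phi = np.sin(n * np.pi * (xq + 1) / 2)
    lam = -(n * np.pi / 2) ** 2                 # phi_n'' = lam * phi_n
    # Galerkin condition (R, phi_n) = 0  =>  lam * u_n * (phi_n, phi_n) = (f, phi_n)
    u_tilde[n] = np.sum(wq * f(xq) * phi) / (lam * np.sum(wq * phi * phi))

# Evaluate the truncated expansion anywhere, not just at special nodes.
x = np.linspace(-1, 1, 101)
u = sum(u_tilde[n] * np.sin(n * np.pi * (x + 1) / 2) for n in range(1, N + 1))
```

With a non-orthogonal basis (e.g. Chebyshev combinations such as $T_n - T_{n+2}$) the same orthogonality conditions would instead produce a coupled linear system for the $\tilde u_n$.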

PS: This is the reference that was used to introduce us to the Galerkin method: https://arxiv.org/pdf/gr-qc/0609020.pdf

Answer:

The goal of all these approximation methods is to find coefficients $\tilde{u}_n$ that minimize, in some norm, the error of the expansion

$u(x)\approx\sum_{n=0}^{N} \tilde{u}_n\phi_n(x)$.

The Galerkin method belongs to the class of error-minimization methods that are optimal in the sense of an energy ($L_2$-type) norm. Popular methods:

  1. Collocation methods or interpolation
    • Minimization of the error at specific points
    • Optimal in the sense of the Lebesgue constant ($L_{\infty}$)
  2. Least-squares or Galerkin methods
    • Minimization of an integral norm of the error
    • Optimal in the $L_{2}$ (or a similar) norm

So to speak: the Galerkin and least-squares methods look at the error over the whole interval rather than at individual points of that interval. You can think of it as minimizing the area under the (squared) error curve. The detailed machinery of each method matters less than this unifying picture.
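As an illustration of the two viewpoints, the sketch below (my own toy example, using the Runge-type function $g(x)=1/(1+16x^2)$; the degree and node counts are arbitrary choices) approximates the same function once by interpolation at Chebyshev nodes and once by $L_2$ projection onto Legendre polynomials. Both are accurate, but each controls a different norm of the error:

```python
import numpy as np

# Toy target: a Runge-type function on [-1, 1].
g = lambda x: 1 / (1 + 16 * x**2)
deg = 20

# 1) Interpolation at Chebyshev nodes: the error is controlled pointwise
#    (L_inf flavor). With deg+1 nodes, the least-squares fit is the interpolant.
nodes = np.cos(np.pi * np.arange(deg + 1) / deg)
p_int = np.polynomial.chebyshev.Chebyshev.fit(nodes, g(nodes), deg)

# 2) Galerkin / least-squares flavor: minimize the integral (L2) error by
#    projecting onto Legendre polynomials, c_n = (2n+1)/2 * (g, P_n).
xq, wq = np.polynomial.legendre.leggauss(400)
c = [(2 * n + 1) / 2 * np.sum(wq * g(xq) * np.polynomial.legendre.Legendre.basis(n)(xq))
     for n in range(deg + 1)]
p_l2 = np.polynomial.legendre.Legendre(c)

# Measure each approximation in "its own" norm.
x = np.linspace(-1, 1, 1001)
err_int = np.max(np.abs(p_int(x) - g(x)))            # sup-norm error
err_l2 = np.sqrt(np.sum(wq * (p_l2(xq) - g(xq))**2))  # L2-norm error
```

Both errors are small and shrink rapidly as `deg` grows; the point is only that interpolation targets the error at nodes while projection targets the error in the mean-square sense over the whole interval.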

Regards