Finite Element Method for the 1D wave equation


I'm solving the 1D wave equation \begin{equation} \frac{\partial^2 \eta}{\partial t ^2} - \frac{\partial^2 \eta}{\partial x ^2} = 0 \end{equation} with boundary conditions \begin{equation} \frac{\partial \eta}{\partial x} = 0 \qquad \qquad \text{on} \qquad \qquad x = 0, 1 \end{equation} using the finite element method.

The weak formulation, where $w(x)$ is the weight function, yields: \begin{equation} \int_{0}^{1} w(x) \frac{\partial^2 \eta}{\partial t ^2} \mathrm{d}x - \int_{0}^{1} w(x) \frac{\partial^2 \eta}{\partial x ^2} \mathrm{d}x = 0 \end{equation} \begin{equation} \int_{0}^{1} w \frac{\partial^2 \eta}{\partial t ^2} \mathrm{d}x - \int_{0}^{1} \frac{\partial }{\partial x }\left( w \frac{\partial \eta}{\partial x }\right) \mathrm{d}x + \int_{0}^{1} \frac{\partial w}{\partial x } \frac{\partial \eta}{\partial x } \mathrm{d}x = 0 \end{equation} The middle integral reduces to the boundary term $\left[ w \, \partial \eta / \partial x \right]_0^1$, which vanishes by the boundary conditions.

Expressing $\eta$ and $w$ in terms of basis functions (summation over the repeated index $j$ implied), \begin{align} \eta = \eta_j (t) \phi_j(x) \\ w = \phi_i(x) \end{align} where the $\phi_i(x)$ are hat functions in my case, gives \begin{equation} \left(\int_{0}^{1} \phi_i \phi_j \mathrm{d}x \right) \frac{d^2 \eta_j}{d t ^2} + \left(\int_{0}^{1} \frac{d \phi_i}{d x } \frac{d \phi_j}{d x } \mathrm{d}x \right)\eta_j= 0 \end{equation} which can be written in matrix form as follows: \begin{equation} M_{ij} \frac{d^2 \eta_j}{d t ^2} = - S_{ij} \eta_j \end{equation}
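For concreteness, here is a minimal sketch of assembling $M_{ij}$ and $S_{ij}$ for hat functions on a uniform grid; the grid size and all variable names are my own choices, not part of the problem statement:

```python
# Minimal sketch: assemble mass and stiffness matrices for P1 hat functions
# on a uniform grid over [0, 1].
import numpy as np

N = 100                      # number of elements (illustrative choice)
h = 1.0 / N                  # element size
n = N + 1                    # number of nodes / basis functions

M = np.zeros((n, n))         # mass matrix      M_ij = int phi_i phi_j dx
S = np.zeros((n, n))         # stiffness matrix S_ij = int phi_i' phi_j' dx

# Exact element matrices for linear hat functions on an element of size h
Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
Se = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])

for e in range(N):           # scatter element matrices into the global ones
    idx = [e, e + 1]
    M[np.ix_(idx, idx)] += Me
    S[np.ix_(idx, idx)] += Se

# Homogeneous Neumann BCs are natural in this formulation:
# no modification of M or S is needed at the boundary nodes.
```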

TASK: Introduce an auxiliary variable $p_j = \frac{d \eta_j}{d t }$ and, using Crank-Nicolson time discretization, formulate the final system $A \mathbf{x} = \mathbf{b}$. Introducing the auxiliary variable gives rise to a system of first-order equations in time: \begin{align*} \frac{d \eta_j}{d t } &= p_j \\ M_{ij} \frac{d p_j}{d t} &= - S_{ij} \eta_j \end{align*} Using Crank-Nicolson time discretization: \begin{align*} \frac{\eta_j^{n+1} - \eta_j^{n}}{\Delta t } &= \frac{1}{2} (p_j^{n+1}+p_j^{n} )\\ M_{ij} \frac{p_j^{n+1} - p_j^{n}}{\Delta t } &= -\frac{1}{2} S_{ij} \left(\eta_j^{n+1} + \eta_j^{n} \right) \end{align*} Here's where I got stuck. I tried eliminating $\eta_j^{n+1}$ and $p_j^{n+1}$ from the right-hand sides of the above equations and obtained: \begin{align} \left[M_{ij} + \left(\frac{\Delta t}{2} \right)^2 S_{ij} \right] p_j^{n+1} &= \left[M_{ij} - \left(\frac{\Delta t}{2} \right)^2 S_{ij} \right] p_j^{n} - \Delta t S_{ij} \eta_j^{n} \\ \left[M_{ij} + \left(\frac{\Delta t}{2} \right)^2 S_{ij} \right] \eta_j^{n+1} &= \left[M_{ij} - \left(\frac{\Delta t}{2} \right)^2 S_{ij} \right] \eta_j^{n} + \Delta t M_{ij} p_j^{n} \end{align} Out of these I can form a single matrix equation, or just implement them as they are. However, neither approach has worked: the numerical solution does not make much sense.
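For reference, the eliminated updates are algebraically equivalent to the coupled Crank-Nicolson system, so a faithful implementation should work. A minimal transcription in Python follows, assuming the $M$ and $S$ from the assembly sketch above; note that a classic pitfall is overwriting $\eta^n$ before forming the right-hand side of the $p$ update, since both right-hand sides must use values from time level $n$:

```python
# Direct transcription of the eliminated updates; assumes M and S as
# assembled in the earlier sketch. All names are illustrative.
import numpy as np

def cn_step_eliminated(eta, p, M, S, dt):
    """One Crank-Nicolson step via the two decoupled solves."""
    A = M + (dt / 2.0) ** 2 * S        # shared left-hand-side matrix
    B = M - (dt / 2.0) ** 2 * S        # shared right-hand-side matrix
    # Both right-hand sides must use eta and p from time level n:
    p_new = np.linalg.solve(A, B @ p - dt * (S @ eta))
    eta_new = np.linalg.solve(A, B @ eta + dt * (M @ p))
    return eta_new, p_new
```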

Questions: Have I missed something? Is my method valid? Does anybody have experience or advice on solving these kinds of problems numerically? Thanks.



BEST ANSWER

I think I found a way...

Rearranging the Crank-Nicolson formulation as follows: \begin{align*} \eta_j^{n+1} - \frac{\Delta t }{2} p_j^{n+1} &= \eta_j^{n} + \frac{\Delta t }{2} p_j^{n} \\ M_{ij} p_j^{n+1} +\frac{\Delta t}{2} S_{ij} \eta_j^{n+1} &= M_{ij} p_j^{n} -\frac{\Delta t}{2} S_{ij} \eta_j^{n} \end{align*}

and solving the matrix form: \begin{equation} \begin{bmatrix} I_{(N+1)\times(N+1)} & - \frac{\Delta t }{2} I_{(N+1)\times(N+1)} \\ \frac{\Delta t }{2} S & M \end{bmatrix} \begin{bmatrix} \boldsymbol{\eta}^{n+1} \\ \mathbf{p}^{n+1} \end{bmatrix} = \begin{bmatrix} I_{(N+1)\times(N+1)} & \frac{\Delta t }{2} I_{(N+1)\times(N+1)} \\ -\frac{\Delta t }{2} S & M \end{bmatrix} \begin{bmatrix} \boldsymbol{\eta}^{n} \\ \mathbf{p}^{n} \end{bmatrix} \end{equation} where $I_{(N+1)\times(N+1)}$ is an $(N+1)\times(N+1)$ identity matrix, produces sensible results!
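A minimal sketch of one way to implement this stepping, assuming the $M$ and $S$ from the assembly sketch in the question; the factor-once structure is my own optimization, since the left-hand block matrix does not change between steps:

```python
# Sketch of the coupled block update; assumes M and S assembled earlier.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def make_cn_step(M, S, dt):
    n = M.shape[0]
    I = np.eye(n)
    A = np.block([[I, -0.5 * dt * I],
                  [0.5 * dt * S, M]])     # left-hand block matrix
    B = np.block([[I, 0.5 * dt * I],
                  [-0.5 * dt * S, M]])    # right-hand block matrix
    lu, piv = lu_factor(A)                # factor A once, reuse every step

    def step(eta, p):
        x = lu_solve((lu, piv), B @ np.concatenate([eta, p]))
        return x[:n], x[n:]               # (eta^{n+1}, p^{n+1})

    return step
```

Starting from, say, $\eta_j^0 = \cos(\pi x_j)$ and $p_j^0 = 0$, repeated application of `step` should reproduce a standing wave. For this linear problem Crank-Nicolson coincides with the implicit midpoint rule and conserves the discrete energy $\frac{1}{2}\mathbf{p}^T M \mathbf{p} + \frac{1}{2}\boldsymbol{\eta}^T S \boldsymbol{\eta}$, which makes a useful sanity check.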

ANOTHER ANSWER

The approach I am most used to for solving these types of problems is to rewrite the problem as a linear least-squares problem, replacing the equality to zero with minimization of the 2-norm: $$\underset{\bf v}{\min}\left\{ \|({\bf D_t}^2-{\bf D_x}^2) {\bf v}\|_2^2 + \|{\bf C_b}{\bf D_x}{\bf v}\|_2^2\right\}$$ where $\bf v$ is a vectorization of the data points, $\bf D_t$ is a matrix representation of the time derivative, $\bf D_x$ is a matrix representation of the $x$-derivative, and $\bf C_b$ is a diagonal matrix with positive values at the boundary-condition data points and (close to) $0$ otherwise. What may be required here is to add a regularization of some sort to make the resulting matrix equation system non-degenerate (to ensure it has a unique solution). When one gets strange results with this method, the cause is usually that the regularization is not good enough. Maybe the same is happening with your numerical schemes? Once you have the matrix equations explicitly, you can measure condition numbers and other quantities that can have a large effect on numerical stability.
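As a sketch of this idea (my own construction under stated assumptions, not a definitive recipe): build sparse $\bf D_t$ and $\bf D_x$ on a space-time grid via Kronecker products, penalize the PDE residual, the Neumann boundary rows, and the initial data, and hand the stacked system to a least-squares solver such as SciPy's `lsqr`. The grid sizes, penalty weights, Tikhonov weight, and cosine initial condition below are all illustrative assumptions; without pinning the initial data, ${\bf v} = 0$ would minimize the objective.

```python
# Sketch: space-time least-squares formulation of the 1D wave equation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def d1(n, h):
    """First-derivative matrix: central inside, one-sided at the ends."""
    D = sp.lil_matrix((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[n - 1, n - 2], D[n - 1, n - 1] = -1.0 / h, 1.0 / h
    return D.tocsr()

nx, nt = 50, 50                               # illustrative grid sizes
hx, ht = 1.0 / (nx - 1), 1.0 / (nt - 1)

Dx = sp.kron(sp.eye(nt), d1(nx, hx)).tocsr()  # D_x on the space-time grid
Dt = sp.kron(d1(nt, ht), sp.eye(nx)).tocsr()  # D_t on the space-time grid
wave = Dt @ Dt - Dx @ Dx                      # (D_t^2 - D_x^2)

# C_b: positive weight at boundary nodes x = 0, 1; zero elsewhere
mask = np.zeros(nt * nx)
mask[0::nx] = 10.0
mask[nx - 1::nx] = 10.0
Cb = sp.diags(mask)

# Pin the initial data so v = 0 is not a solution; the cosine mode and
# the weight 10.0 are my own test choices.
ic_eta = sp.eye(nt * nx, format="csr")[:nx]   # rows selecting eta(x, 0)
ic_vel = Dt[:nx]                              # rows approximating eta_t(x, 0)
eta0 = np.cos(np.pi * np.linspace(0.0, 1.0, nx))

eps = 1e-6                                    # Tikhonov regularization weight
A = sp.vstack([wave, Cb @ Dx, 10.0 * ic_eta, 10.0 * ic_vel,
               eps * sp.eye(nt * nx)]).tocsr()
b = np.concatenate([np.zeros(wave.shape[0]), np.zeros(nt * nx),
                    10.0 * eta0, np.zeros(nx), np.zeros(nt * nx)])

v = lsqr(A, b)[0].reshape(nt, nx)             # solution on the (t, x) grid
```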