Is there a name for a biharmonic equation with an extra set of squared mixed partials?


I've run into the following PDE, defined on a rectangle: $$ \nabla^4 \psi + \alpha\:\partial^2_x\partial^2_y\psi = f(x, y) $$ where $\alpha$ is a constant. It takes on the following values at the boundaries: $$ \psi = 0 \\ \frac{\partial \psi}{\partial n} = 0 $$ with $\partial/\partial n$ denoting the derivative normal to the boundary. Every biharmonic equation solver I can find solves an equation of the form: $$ \nabla^4\psi + \alpha\nabla^2\psi + \beta \psi = f(x, y) $$ where $\alpha$ and $\beta$ are constants. I'm wondering whether the first PDE has a name so that I can investigate premade numerical solvers. If not, might there be a way to extend typical numerical biharmonic solvers to work with the extra mixed derivative term? I have been looking at finite difference solvers, but it's not clear to me how dependent on the specific details of the equation these algorithms are.

On BEST ANSWER

This is not an answer to the original question. But it seems that some finite-difference solvers for the biharmonic equation on a rectangular domain are not hard to extend to handle this type of equation as well.

This problem is attractive because it has constant coefficients and is defined on a rectangular domain. Such problems are often directly solvable by FFT methods. If you google for a direct solver for the biharmonic equation on a rectangle, you'll find a couple of papers. This one is particularly interesting. In addition to the method itself, that work gives a good overview of existing methods for the biharmonic problem in 2D.

Let's try to adapt one of them to your problem. First, rewrite the equation a bit: $$ \left[ \partial_x^4 + (2 + \alpha) \partial_x^2 \partial_y^2 + \partial_y^4 \right] u = f(x, y), \qquad (x, y) \in G = [0, L_x] \times [0, L_y]\\ \left.u\right|_{\partial G} = 0, \qquad \left.\frac{\partial u}{\partial n}\right|_{\partial G} = 0. $$ Introduce a regular grid on $G$: $$ x_i = i h_x, \; i = -1, 0, \dots, N_x, N_x + 1, \quad h_x = \frac{L_x}{N_x}\\ y_j = j h_y, \; j = -1, 0, \dots, N_y, N_y + 1, \quad h_y = \frac{L_y}{N_y} $$ The nodes with $i = -1$, $j = -1$, $i = N_x + 1$, or $j = N_y + 1$ lie outside the domain $G$. They make it possible to approximate the boundary condition $\left.\frac{\partial u}{\partial n}\right|_{\partial G} = 0$ with simple central differences.

The approximated boundary conditions take the form $$ u_{0,j} = u_{N_x,j} = u_{i,0} = u_{i,N_y} = 0\\ \frac{u_{1,j} - u_{-1,j}}{2h_x} = \frac{u_{N_x + 1,j} - u_{N_x - 1,j}}{2h_x} = 0\\ \frac{u_{i,1} - u_{i,-1}}{2h_y} = \frac{u_{i,N_y+1} - u_{i,N_y-1}}{2h_y} = 0\\ $$ The boundary conditions express the external (outside $G$) unknowns in terms of the internal ones. The remaining $(N_x - 1) (N_y - 1)$ values are unknown. Each internal node has an associated difference equation $$ \frac{u_{i-2,j} -4 u_{i-1,j} + 6u_{i,j} -4 u_{i+1,j} + u_{i+2,j}}{h_x^4} + \\ (2 + \alpha) \frac{u_{i-1,j-1} - 2u_{i-1,j} + u_{i-1,j+1} -2 u_{i,j-1} + 4u_{i,j} -2u_{i,j+1}+u_{i+1,j-1} - 2u_{i+1,j} + u_{i+1,j+1}}{h_x^2 h_y^2} + \\ \frac{u_{i,j-2} -4 u_{i,j-1} + 6u_{i,j} -4 u_{i,j+1} + u_{i,j+2}}{h_y^4} = f_{i,j} \equiv f(x_i, y_j) $$
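As a small sanity check (my own sketch, not from the original answer): the nine-point mixed-derivative stencil above is the outer product of two one-dimensional second-difference stencils $[1, -2, 1]$, and it is exact on $u = x^2 y^2$, whose mixed fourth derivative is the constant $4$.

```python
import numpy as np

# The 3x3 stencil for d^2/dx^2 d^2/dy^2 as an outer product of [1, -2, 1]
d2 = np.array([1.0, -2.0, 1.0])
stencil = np.outer(d2, d2)      # rows: x-offsets -1..1, cols: y-offsets -1..1

# Apply it to u(x, y) = x^2 y^2 sampled on the stencil nodes; the exact
# mixed fourth derivative of this u is 4 everywhere.
hx, hy = 0.1, 0.2
x0, y0 = 0.3, 0.7
offsets = np.array([-1.0, 0.0, 1.0])
xv = x0 + offsets * hx
yv = y0 + offsets * hy
u = np.outer(xv**2, yv**2)

approx = np.sum(stencil * u) / (hx**2 * hy**2)
```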

Let's introduce some matrices: $$ \Lambda_2 = \begin{pmatrix} -2 & 1\\ 1 & -2 & 1\\ &\ddots&\ddots&\ddots\\ &&1 & -2 & 1\\ &&&1 & -2\\ \end{pmatrix} $$ $$ \Lambda_4 = \begin{pmatrix} 7 & -4 & 1\\ -4 & 6 & -4 & 1\\ 1 & -4 & 6 & -4 & 1\\ &\ddots&\ddots&\ddots&\ddots&\ddots\\ &&1 & -4 & 6 & -4 & 1\\ &&&1 & -4 & 6 & -4\\ &&&&1 & -4 & 7 \end{pmatrix} $$ They are obtained from the standard finite-difference operators for the second and fourth derivatives by eliminating the external and boundary unknowns via the boundary conditions (the corner entries $7$ in $\Lambda_4$ appear because the Neumann condition reflects the ghost values: $u_{-1} = u_1$). Using these operators, the difference problem can be written more compactly: $$ \left[ \frac{1}{h_x^4} \Lambda_4 \otimes I + \frac{2 + \alpha}{h_x^2 h_y^2} \Lambda_2 \otimes \Lambda_2 + \frac{1}{h_y^4} I \otimes \Lambda_4\right] U = F $$ Here $A \otimes B$ stands for the Kronecker product of an operator $A$ acting along the $x$ direction and an operator $B$ acting along the $y$ direction.
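As a sketch (assuming SciPy; the function names are mine), these matrices and the Kronecker-product assembly can be written as:

```python
import numpy as np
import scipy.sparse as sp

def lam2(n):
    """Tridiagonal 1D second-difference matrix (Dirichlet values eliminated)."""
    return sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                    [-1, 0, 1], format="csr")

def lam4(n):
    """Pentadiagonal 1D fourth-difference matrix; corner entries are 7
    because the Neumann condition reflects the ghost node (u_{-1} = u_1)."""
    A = sp.diags([np.ones(n - 2), -4.0 * np.ones(n - 1), 6.0 * np.ones(n),
                  -4.0 * np.ones(n - 1), np.ones(n - 2)],
                 [-2, -1, 0, 1, 2], format="lil")
    A[0, 0] = 7.0
    A[n - 1, n - 1] = 7.0
    return A.tocsr()

def assemble(Nx, Ny, hx, hy, alpha):
    """The full 2D operator via Kronecker products, as in the text."""
    Ix, Iy = sp.identity(Nx - 1), sp.identity(Ny - 1)
    return (sp.kron(lam4(Nx - 1), Iy) / hx**4
            + (2.0 + alpha) * sp.kron(lam2(Nx - 1), lam2(Ny - 1)) / (hx**2 * hy**2)
            + sp.kron(Ix, lam4(Ny - 1)) / hy**4)
```

This assembled sparse matrix is only needed for verification; the fast method below never forms it.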

Consider the discrete sine transform of the first kind (DST-I), which is an FFT-family transform: $$ x_n = \sqrt\frac{2}{N}\sum_{k=1}^{N-1} X_k \sin \frac{\pi kn}{N}, \qquad X_k = \sqrt\frac{2}{N}\sum_{n=1}^{N-1} x_n \sin \frac{\pi kn}{N}. $$ The matrix $\mathbb F$ of size $(N-1) \times (N-1)$ defined as $$ \mathbb F = \left[\sqrt\frac{2}{N} \sin \frac{\pi kn}{N}\right]_{kn} $$ diagonalizes $\Lambda_2$ and almost diagonalizes $\Lambda_4$. Also $\mathbb F^{-1} = \mathbb F^\top = \mathbb F$. The DST basis consists of the functions $$ \psi^{(k)} = \sqrt{\frac{2}{N}}\begin{pmatrix} \sin \frac{\pi k}{N}& \sin \frac{2 \pi k}{N}& \dots& \sin \frac{(N-2) \pi k}{N}& \sin \frac{(N-1) \pi k}{N} \end{pmatrix}^\top. $$ $$ \Lambda_2 \psi^{(k)} = \lambda_k \psi^{(k)}\\ \lambda_k = -4 \sin^2 \frac{\pi k}{2N}. $$ The vectors $\psi^{(k)}$ are also eigenvectors of the matrix $$ \Lambda_4^{C} = \begin{pmatrix} 5 & -4 & 1\\ -4 & 6 & -4 & 1\\ 1 & -4 & 6 & -4 & 1\\ &\ddots&\ddots&\ddots&\ddots&\ddots\\ &&1 & -4 & 6 & -4 & 1\\ &&&1 & -4 & 6 & -4\\ &&&&1 & -4 & 5 \end{pmatrix} $$ with eigenvalues $\mu_k = \lambda_k^2$. Denote $R = \Lambda_4 - \Lambda_4^C = \operatorname{diag}[2, 0, \dots, 0, 2]$.
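These claims are easy to verify numerically (my own check, not from the original answer): $\mathbb F$ is its own inverse and diagonalizes $\Lambda_2$ with the stated eigenvalues, and $\Lambda_4^C$ is exactly $\Lambda_2^2$, which is why it shares the eigenvectors with eigenvalues $\lambda_k^2$.

```python
import numpy as np

N = 8
k = np.arange(1, N)
# Orthonormal DST-I matrix F, symmetric and involutory
F = np.sqrt(2.0 / N) * np.sin(np.pi * np.outer(k, k) / N)

L2 = (np.diag(-2.0 * np.ones(N - 1))
      + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1))
lam = -4.0 * np.sin(np.pi * k / (2 * N)) ** 2

assert np.allclose(F @ F, np.eye(N - 1))        # F^{-1} = F
assert np.allclose(F @ L2 @ F, np.diag(lam))    # F diagonalizes Lambda_2

# Lambda_4^C as written above (corner entries 5) equals Lambda_2 squared
L4C = (np.diag(6.0 * np.ones(N - 1))
       + np.diag(-4.0 * np.ones(N - 2), 1) + np.diag(-4.0 * np.ones(N - 2), -1)
       + np.diag(np.ones(N - 3), 2) + np.diag(np.ones(N - 3), -2))
L4C[0, 0] = L4C[-1, -1] = 5.0
assert np.allclose(L4C, L2 @ L2)
assert np.allclose(F @ L4C @ F, np.diag(lam ** 2))   # mu_k = lambda_k^2
```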

The problem changes to $$ \left[ \underbrace{\frac{1}{h_x^4} \Lambda_4^C \otimes I + \frac{2 + \alpha}{h_x^2 h_y^2} \Lambda_2 \otimes \Lambda_2 + \frac{1}{h_y^4} I \otimes \Lambda_4^C}_{A,\text{ diagonalizable by }\mathbb F} + \underbrace{ \frac{1}{h_x^4} R \otimes I + \frac{1}{h_y^4} I \otimes R }_{B,\text{ low rank perturbation}} \right] U = F $$

Introduce $W = \mathbb F \otimes \mathbb F$, the matrix of the 2D DST transform: $$ A = W D W, \quad D = \operatorname{diag}[d_{i,j}]\\ d_{i,j} = \frac{(\mu_x)_i}{h_x^4} + \frac{2 + \alpha}{h_x^2 h_y^2} (\lambda_x)_i (\lambda_y)_j + \frac{(\mu_y)_j}{h_y^4}. $$ If we didn't have the perturbation term, the solution of $AU = F$ could be obtained in the following steps:

  1. Perform 2D DST of $F$.
  2. Divide obtained DST coefficients by $d_{i,j}$ elementwise.
  3. Perform the inverse 2D DST (which is the same as the forward 2D DST); the result is the solution $U$.
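Assuming the notation above, these three steps can be sketched in Python (the function name is mine; the DST matrices are built explicitly for clarity, while an FFT-based DST-I would be used on large grids):

```python
import numpy as np

def solve_unperturbed(Fv, hx, hy, alpha):
    """Solve A U = F for the unperturbed (Lambda_4^C) operator by 2D DST."""
    Nx, Ny = Fv.shape[0] + 1, Fv.shape[1] + 1
    kx, ky = np.arange(1, Nx), np.arange(1, Ny)
    Fx = np.sqrt(2.0 / Nx) * np.sin(np.pi * np.outer(kx, kx) / Nx)
    Fy = np.sqrt(2.0 / Ny) * np.sin(np.pi * np.outer(ky, ky) / Ny)
    lx = -4.0 * np.sin(np.pi * kx / (2 * Nx)) ** 2   # eigenvalues of Lambda_2
    ly = -4.0 * np.sin(np.pi * ky / (2 * Ny)) ** 2
    d = (lx[:, None] ** 2 / hx**4
         + (2.0 + alpha) * np.outer(lx, ly) / (hx**2 * hy**2)
         + ly[None, :] ** 2 / hy**4)
    # steps 1-3: forward 2D DST, divide by d_ij, inverse 2D DST
    return Fx @ ((Fx @ Fv @ Fy) / d) @ Fy
```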

The perturbation term makes things more complicated. One approach is to use the Woodbury identity: $$ (A + VV^\top)^{-1} F = A^{-1} F - A^{-1} V (I + V^\top A^{-1} V)^{-1} V^\top A^{-1} F. $$ Here $VV^\top = B$ and $V = (V_1\; V_2)$, where $$ V_1 = \frac{\sqrt{2}}{h_x^2} \begin{pmatrix} 1&0\\ 0&0\\ \vdots&\vdots\\ 0&0\\ 0&1 \end{pmatrix} \otimes I, \quad V_2 = \frac{\sqrt{2}}{h_y^2} I \otimes \begin{pmatrix} 1&0\\ 0&0\\ \vdots&\vdots\\ 0&0\\ 0&1 \end{pmatrix}. $$

In effect, $V_1^\top$ and $V_2^\top$ extract the near-boundary rows (along $x$) and columns (along $y$) of the 2D array, up to a scale factor.
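A quick numerical check (my own sketch) that this $V = (V_1\; V_2)$ really reproduces the perturbation $B = \frac{1}{h_x^4} R \otimes I + \frac{1}{h_y^4} I \otimes R$:

```python
import numpy as np

n = 5                                   # unknowns per direction, N - 1
hx, hy = 0.2, 0.25
E = np.zeros((n, 2))
E[0, 0] = E[-1, 1] = 1.0                # selects the first and last index
I = np.eye(n)
V1 = np.sqrt(2.0) / hx**2 * np.kron(E, I)
V2 = np.sqrt(2.0) / hy**2 * np.kron(I, E)
V = np.hstack([V1, V2])

R = np.diag([2.0] + [0.0] * (n - 2) + [2.0])
B = np.kron(R, I) / hx**4 + np.kron(I, R) / hy**4
assert np.allclose(V @ V.T, B)          # V V^T equals the perturbation B
```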

The main cost of applying the Woodbury formula directly lies in inverting the capacitance matrix $I + V^\top A^{-1}V$. The simplest approach is to compute $(I + V^\top A^{-1}V)^{-1} r$ iteratively, as the solution of $$ (I + V^\top A^{-1}V)s = r. $$ The matrix is s.p.d., so we can use conjugate gradients. The only operation CG needs is the matrix-vector product $s \mapsto (I + V^\top A^{-1}V)s$, which is roughly as expensive as computing $A^{-1} g$, and that can be done by the algorithm for the unperturbed system above.
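The matrix-free CG pattern looks like this with SciPy (a sketch only: the 2×2 SPD matrix here is a toy stand-in for $s \mapsto (I + V^\top A^{-1} V)s$, not the actual capacitance matrix):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def capacitance_matvec(s):
    """Stand-in SPD matrix-vector product; in the real solver this would
    apply s + V^T A^{-1} (V s) using the fast DST-based A^{-1}."""
    M = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    return M @ s

# CG never sees the matrix itself, only the matvec
op = LinearOperator((2, 2), matvec=capacitance_matvec, dtype=float)
r = np.array([1.0, 1.0])
s, info = cg(op, r)            # info == 0 signals convergence
```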

Here is an implementation of this algorithm in Python.