How best to generalize finite difference Laplacian matrix from 1D to 2D (and beyond)


To numerically solve the Poisson equation in one dimension we might transform the problem into a linear system $A u = b$, where $A$ is a finite difference approximation of the Laplace operator, say

$$ A = \frac{1}{h^2} \begin{bmatrix} -2 & 1 & 0 & 0 \\ 1 & -2 & 1 & 0 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 1 & -2 \\ \end{bmatrix}, $$
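In code, this 1D matrix can be assembled in a few lines (a minimal numpy sketch of my own, not part of the original question; `laplacian_1d` is an illustrative name):

```python
import numpy as np

def laplacian_1d(n, h=1.0):
    """Finite difference 1D Laplacian on n interior points, zero Dirichlet BCs."""
    A = -2.0 * np.eye(n)                      # main diagonal: -2
    A += np.eye(n, k=1) + np.eye(n, k=-1)     # first super- and subdiagonals: 1
    return A / h**2

A = laplacian_1d(4)  # the 4x4 matrix displayed above (with h = 1)
```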

with $h$ being the step size in our discretization. When working in two dimensions we can do something similar, but it takes a little more work to find a corresponding matrix equation. If $\{u_{ij}\}$ are the variables analogous to $\{u_i\}$ above, then a straightforward approach is to stack the columns of the matrix $[u_{ij}]$ into a vector

$$u^\text{2D} = [u_{11}, \dotsc, u_{m1}, u_{12}, \dotsc, u_{m2}, \dotsc, u_{1n}, \dotsc u_{mn}]^T,$$

and similarly for the right-hand side. One can then construct an $mn$-by-$mn$ matrix $A^\text{2D}$ to implement the finite difference approximation of the Laplace operator and write $A^\text{2D} u^\text{2D} = b^\text{2D}$.
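In numpy terms (an illustrative sketch, with a made-up $3 \times 2$ example), stacking the columns of $[u_{ij}]$ is exactly Fortran-order flattening:

```python
import numpy as np

# hypothetical 3x2 grid of unknowns u_ij (entry ij written as the digits i, j)
U = np.array([[11, 12],
              [21, 22],
              [31, 32]])

# stack the columns: [u_11, ..., u_m1, u_12, ..., u_m2]
u_2d = U.flatten(order='F')
# u_2d is [11, 21, 31, 12, 22, 32]
```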

The step of writing down $A^\text{2D}$ is somewhat tedious and inelegant, likely because the vector representation is less natural in the 2D context. I suspect there might be a way to construct $A^\text{2D}$ directly from $A$, perhaps as some kind of product of $A$ with itself. I have looked at the Kronecker product, but that does not seem to be quite correct. Perhaps there is also some better way to construct $u^\text{2D}$, other than stacking. Any ideas? Best would be if there is a straightforward procedure that also generalizes to 3D and beyond.


Best answer:

As another comment notes, it is probably better to use some kind of matrix-free method.

That said, if you do want to construct the matrix, then you're on the right track with Kronecker products. Let $U$ denote the matrix corresponding to $u^\text{2D}$, so that $u^\text{2D} = \operatorname{vec}(U)$, with $\operatorname{vec}$ denoting the vectorization operator. Let $f = \operatorname{vec}(F) = A^\text{2D} u^\text{2D}$, so that $f \approx \Delta u$ with $u$ equal to zero on the boundary, and where $F$ has its columns stacked in the same way as $U$. We have $$ f_{i,j} = \frac{1}{h^2}\left( u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4u_{i,j}\right), $$ where $u_{i,j} = 0$ whenever $i \in \{0, m+1\}$ or $j \in \{0, n+1\}$.

Let $B_n$ denote the $n \times n$ matrix $$ B_n = \pmatrix{0&1\\&\ddots&\ddots\\&&0&1\\&&&0}. $$ Verify that $$ F = \frac{1}{h^2}\left[B_m U + B_m^\top U + U B_n + U B_n^\top - 4U\right]. $$ We can rewrite this as $$ F = \frac{1}{h^2}\left[(B_m + B_m^\top)U + U(B_n + B_n^\top) - 4U\right]. $$ Using the properties of vectorization, we have $$ \begin{align*} f &= \operatorname{vec}(F) = \operatorname{vec}\left(\frac{1}{h^2}\left[(B_m + B_m^\top)U + U(B_n + B_n^\top) - 4U\right]\right) \\ &= \frac{1}{h^2}\left[\operatorname{vec}[(B_m + B_m^\top)U] + \operatorname{vec}[U(B_n + B_n^\top)] - \operatorname{vec}[4U]\right] \\ &= \frac{1}{h^2}\left[(I_n \otimes (B_m + B_m^\top))\operatorname{vec}(U) + ((B_n + B_n^\top) \otimes I_m)\operatorname{vec}(U) - 4 I_{mn}\operatorname{vec}(U) \right] \\ &= \frac{1}{h^2}\left[(I_n \otimes (B_m + B_m^\top)) + ((B_n + B_n^\top) \otimes I_m) - 4 I_{mn} \right]\operatorname{vec}(U). \end{align*} $$ So with that, we find $$ A^\text{2D} = \frac{1}{h^2}\left[(I_n \otimes (B_m + B_m^\top)) + ((B_n + B_n^\top) \otimes I_m) - 4 I_{mn} \right]. $$
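The Kronecker construction can be checked numerically against the 5-point stencil (a sketch using scipy's sparse matrices; `shift_matrix`, `laplacian_2d`, and the test sizes are my own choices, not from the original answer):

```python
import numpy as np
import scipy.sparse as sp

def shift_matrix(n):
    """B_n: ones on the first superdiagonal of an n x n matrix."""
    return sp.diags([np.ones(n - 1)], [1])

def laplacian_2d(m, n, h=1.0):
    """A^2D = (I_n kron C_m) + (C_n kron I_m) - 4 I_mn, all divided by h^2."""
    Cm = shift_matrix(m); Cm = Cm + Cm.T    # C_m = B_m + B_m^T
    Cn = shift_matrix(n); Cn = Cn + Cn.T    # C_n = B_n + B_n^T
    A = sp.kron(sp.eye(n), Cm) + sp.kron(Cn, sp.eye(m)) - 4 * sp.eye(m * n)
    return A / h**2

# sanity check: apply A^2D to vec(U) and compare with the 5-point stencil
m, n = 4, 5
rng = np.random.default_rng(0)
U = rng.standard_normal((m, n))
f = (laplacian_2d(m, n) @ U.flatten(order='F')).reshape((m, n), order='F')

Upad = np.pad(U, 1)  # zero Dirichlet boundary
stencil = (Upad[:-2, 1:-1] + Upad[2:, 1:-1]
           + Upad[1:-1, :-2] + Upad[1:-1, 2:] - 4 * U)
assert np.allclose(f, stencil)
```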


Although it goes beyond what can be derived directly from the properties of matrix multiplication above, the natural generalization works as expected. Denote $C_n = B_n + B_n^\top$. If $U$ is an $m \times n \times p$ array, then $$ A^\text{3D} = \frac{1}{h^2}\left[ (I_p \otimes I_n \otimes C_m) + (I_p \otimes C_n \otimes I_m) + (C_p \otimes I_n \otimes I_m) - 6 I_{mnp}\right]. $$ More generally, if $U$ is an $m_1 \times \dots \times m_k$ array, then $$ A^{k\text{D}} = \frac{1}{h^2}\left(\left[\sum_{p=1}^k \bigotimes_{q=1}^k M_{pq} \right] - 2k\, I_{m_1 \cdots m_k}\right), $$ where $$ M_{pq} = \begin{cases} C_{m_{k+1-q}} & p = k+1-q,\\ I_{m_{k+1-q}} & \text{otherwise.} \end{cases} $$ (Each dimension contributes $-2$ to the diagonal, so the constant is $2k$, not $2^k$.)

Here, the vectorization of the $k$-dimensional array $U$ can be expressed as $$ u^{k\text{D}} = \sum_{i_1 = 1}^{m_1}\cdots \sum_{i_k = 1}^{m_k} u_{i_1,\dots,i_k}\, (e_{i_k}^{(m_k)} \otimes \cdots \otimes e_{i_1}^{(m_1)}), $$ where $e_i^{(m)}$ denotes the $i$th column of $I_m$. Within this framework, one can verify that $A^{k\text{D}}$ behaves as expected using the properties of the Kronecker product together with the fact that $$ B_n e_k^{(n)} = \begin{cases} e_{k-1}^{(n)} & k > 1,\\ 0 & k = 1. \end{cases} $$
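The general $k$-dimensional construction is short to code (my own illustrative sketch; the Kronecker factors run from dimension $k$ down to dimension $1$ to match the column-stacking vectorization, and `laplacian_kd` is a hypothetical name):

```python
import numpy as np
import scipy.sparse as sp

def C(n):
    """C_n = B_n + B_n^T: ones on the first super- and subdiagonals."""
    e = np.ones(n - 1)
    return sp.diags([e, e], [1, -1])

def laplacian_kd(shape, h=1.0):
    """A^kD for a grid of the given shape, zero Dirichlet boundaries."""
    k = len(shape)
    N = int(np.prod(shape))
    A = -2 * k * sp.eye(N)              # diagonal term: -2k
    for p in range(k):                  # dimension p gets the C factor
        term = sp.eye(1)
        for q in reversed(range(k)):    # factors from dimension k down to 1
            factor = C(shape[q]) if q == p else sp.eye(shape[q])
            term = sp.kron(term, factor)
        A = A + term
    return A / h**2

# consistency check against the 2D formula derived above
m, n = 3, 4
A2 = sp.kron(sp.eye(n), C(m)) + sp.kron(C(n), sp.eye(m)) - 4 * sp.eye(m * n)
assert np.allclose(laplacian_kd((m, n)).toarray(), A2.toarray())
```

For $k = 3$ the diagonal entries are $-6/h^2$, as expected for the 7-point stencil.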