When taking the second derivative of a vector, I can use a matrix for that operation: $$d_x\vec{f}=\vec{A}_x\cdot\vec{f}$$ with $$A_x=\begin{pmatrix} -2& 1 & 0 & \cdots &0 & 0 & 0 \\ 1& -2 & 1 & & 0 & 0 & 0 \\ 0& 1 & -2 & &0 & 0 & 0 \\ \vdots& & & \ddots & & &\vdots \\ 0& 0 & 0& &-2 & 1 & 0 \\ 0& 0 & 0 & &1 & -2 & 1 \\ 0& 0 & 0 &\cdots&0 & 1 & -2 \end{pmatrix}\cdot\frac{1}{dx^2}$$ This can also be applied to a matrix, i.e. $$d_x\vec{B}=\vec{A}_x\cdot\vec{B}$$ But how can I do that if I want to calculate not only $d_x$ but also $d_y$ for $\vec{B}$? How can I then write $\vec{A}_y$ in $$d_x\vec{B}+d_y\vec{B}=\vec{A}_x\cdot\vec{B}+\vec{A}_y\cdot\vec{B}$$ ? I can of course write it as $$d_x\vec{B}+d_y\vec{B}=\vec{A}_x\cdot\vec{B}+\left(\vec{A}_x\cdot\vec{B}^\top\right)^\top$$ but I would prefer to have it as a single matrix.
Finite differences in multiple dimensions
552 Views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) · 2 answers below
Based on M. Winter's comment I was able to find a solution to my question (which was perhaps poorly worded):
I can write my matrix $\vec{B}$ either as a matrix or, by stacking its columns, as a vector:
$$\begin{pmatrix}
1& 4 & 7 \\
2& 5 & 8 \\
3& 6 & 9
\end{pmatrix}\equiv\begin{pmatrix}
1\\
2\\
3\\
4\\
5\\
6\\
7\\
8\\
9
\end{pmatrix} $$
Using the right side, I can write my matrix for the finite difference in $x$-direction as
$$\vec{A}_x=\begin{pmatrix}
-2& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1& -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0& 1 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0& 0 & 0 & -2 & 1 & 0 & 0 & 0 & 0 \\
0& 0 & 0 & 1 & -2 & 1 & 0 & 0 & 0 \\
0& 0 & 0 & 0 & 1 & -2 & 0 & 0 & 0 \\
0& 0 & 0 & 0 & 0 & 0 & -2 & 1 & 0 \\
0& 0 & 0 & 0 & 0 & 0 & 1 & -2 & 1 \\
0& 0 & 0 & 0 & 0 & 0 & 0 & 1 & -2
\end{pmatrix} $$
and in $y$-direction as
$$ \vec{A}_y=\begin{pmatrix}
-2& 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0& -2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0& 0 & -2 & 0 & 0 & 1 & 0 & 0 & 0 \\
1& 0 & 0 & -2 & 0 & 0 & 1 & 0 & 0 \\
0& 1 & 0 & 0 & -2 & 0 & 0 & 1 & 0 \\
0& 0 & 1 & 0 & 0 & -2 & 0 & 0 & 1 \\
0& 0 & 0 & 1 & 0 & 0 & -2 & 0 & 0 \\
0& 0 & 0 & 0 & 1 & 0 & 0 & -2 & 0 \\
0& 0 & 0 & 0 & 0 & 1 & 0 & 0 & -2
\end{pmatrix}$$
resulting in
$$\vec{A}=\frac{1}{h^2}\left(\vec{A}_x+\vec{A}_y\right)$$
where $h$ is the grid spacing,
which solves my problem.
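The two $9\times 9$ matrices above are Kronecker products of the $3\times 3$ second-difference matrix with the identity, which is a convenient way to build them in practice. A minimal NumPy sketch (the names `T`, `Ax`, `Ay` are illustrative, not from the answer):

```python
import numpy as np

n = 3
# 1D second-difference matrix with the (1, -2, 1) stencil on n points
T = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

# With the column-stacking convention above, the 9x9 operators are
# Kronecker products: A_x is block-diagonal, A_y couples entries 3 apart.
Ax = np.kron(np.eye(n), T)  # reproduces the A_x shown above
Ay = np.kron(T, np.eye(n))  # reproduces the A_y shown above

# The 3x3 example grid, with its columns stacked into a length-9 vector
B = np.arange(1.0, 10.0).reshape(n, n, order="F")
vecB = B.flatten(order="F")

# Acting with the combined matrix on the vector matches acting with T
# on the columns (left) and rows (right) of the matrix form
assert np.allclose((Ax + Ay) @ vecB, (T @ B + B @ T).flatten(order="F"))
```

Note that `order="F"` (column-major) is what implements the column-stacking identification used in the answer; NumPy's default `flatten()` is row-major.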
If the problem is understood as described in my comment, then this might be a solution:
Your matrix $A_x$ applies a finite-difference operator to a vector. When applied to a matrix, it acts on each column of that matrix separately. A left-multiplied matrix has no way to combine values from different rows.
However, the Laplace operator does split nicely between the dimensions, so you can instead apply the same matrix $A_x$ from the right to $B$, so that it acts on the rows instead. Combining both gives
$$A_xB+BA_x.$$
This is essentially the same as your formulation with transposed matrices. But as I said, there is no way to combine this into a single matrix, because a left-multiplied matrix can only act on columns, and a right-multiplied matrix can only act on rows.
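The column/row distinction, and the equivalence to the transpose form from the question, can be checked in a few lines of NumPy (a sketch; the names are illustrative):

```python
import numpy as np

# The 3x3 tridiagonal A_x from the question (spacing factor omitted)
T = np.array([[-2.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -2.0]])
B = np.arange(1.0, 10.0).reshape(3, 3)

cols = T @ B   # left multiplication: differences down each column
rows = B @ T   # right multiplication: differences along each row

# The transpose form from the question is exactly right multiplication
# (T is symmetric, so B @ T.T equals B @ T)
assert np.allclose((T @ B.T).T, rows)
```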
As I also stated in my comment, it is quite uncommon to encode a 2D grid as a matrix (matrices were not really made for this). Instead, you encode such an $(n\times m)$ grid in a vector like this:
$$\vec f=(\underbrace{x_{1,1},..., x_{1,m}}_{\text{first row}},\underbrace{x_{2,1},..., x_{2,m}}_{\text{second row}},\cdots,\underbrace{x_{n,1},...,x_{n,m}}_{\text{last row}}).$$
Then you can again use a single matrix $A_{\Delta}$ which, applied to $\vec f$, gives you the discrete Laplacian. But this matrix is not of such a simple form as $A_x$: it has several nonzero off-diagonals.
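For the row-by-row vector ordering above, one way to build $A_\Delta$ is again via Kronecker products; the off-diagonal structure then shows up at offsets $\pm 1$ (neighbors within a row) and $\pm m$ (neighbors in adjacent rows). A sketch, with the function names `second_diff` and `laplacian_2d` being my own:

```python
import numpy as np

def second_diff(k):
    """1D (1, -2, 1) second-difference matrix on k points."""
    return (np.diag(-2.0 * np.ones(k))
            + np.diag(np.ones(k - 1), 1)
            + np.diag(np.ones(k - 1), -1))

def laplacian_2d(n, m):
    """Discrete Laplacian on an n x m grid stored row by row."""
    return (np.kron(np.eye(n), second_diff(m))    # within-row neighbors
            + np.kron(second_diff(n), np.eye(m)))  # between-row neighbors

A = laplacian_2d(4, 5)
# Nonzeros sit on the main diagonal and on offsets +-1 and +-m (= +-5)
offsets = {j - i for i, j in zip(*np.nonzero(A))}
assert offsets == {-5, -1, 0, 1, 5}
```

For large grids one would build this sparsely (e.g. with `scipy.sparse.kron`), since the dense $nm\times nm$ matrix is mostly zeros.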