Divergence in Definition of Laplace-Beltrami Operator


I am trying to derive an explicit formula for the Laplace-Beltrami operator in global Cartesian coordinates for the special case of a plane curve. I have found this article, and I would like to match its expression (6) for the LB operator on a curve with the standard definition in terms of the metric tensor.

According to formula $(6)$ in the paper, the Laplace-Beltrami operator on a plane curve can be written as

\begin{align} \Delta_{LB}\, u & = \Delta u + \kappa\,u_{n} - u_{nn} \\ & = \tag{$\star$} \Delta u + \kappa\,\vec{n}\cdot\nabla u - \vec{n}\cdot\nabla\left(\vec{n}\cdot\nabla u\right) \end{align}

  • $\,\vec{n}\,$ is the unit normal vector,
  • $\,\kappa=-\nabla\cdot\vec{n}\,$ is the curvature,
  • $\,u_{n} = \vec{n}\cdot\nabla u\,$ and $\,u_{nn} = \vec{n}\cdot\nabla \left(\vec{n}\cdot\nabla u\right)\,$ are the first and second normal derivatives,
  • $\,\nabla u\,$ and $\,\Delta u\,$ are respectively the gradient and the Laplacian of $\,u\,$.

I am having trouble deriving $(\star)$, or matching it with the metric-tensor expression for the LB operator

\begin{align}\tag{$\ast$} \Delta_{LB}\, u = \dfrac{1}{\sqrt{\left\lvert g\right\rvert}}\,\partial_i\,\Big(\sqrt{\left\lvert g\right\rvert} \,g^{ij}\,\partial_j \,u \Big) \end{align}
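For a plane curve, $(\ast)$ is easy to make explicit: with a parametrization $\gamma(t)$, the metric has the single component $g = \lvert\gamma'(t)\rvert^{2}$, so

\begin{align} \Delta_{LB}\, u = \frac{1}{\lvert\gamma'\rvert}\,\partial_t\left(\frac{1}{\lvert\gamma'\rvert}\,\partial_t\, u\right), \end{align}

which is just the second derivative of $u$ with respect to arclength.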

I can derive $(\star)$ from the Laplace-Beltrami expression $\,\Delta_{LB}\,u = \nabla_{s}\cdot\big(\nabla_{s}\,u\big)\,$ by assuming that the surface divergence of a vector equals the regular divergence of its projection onto the curve.

This is a BIG assumption, and I do not know how to justify it. I would appreciate it if someone could help me justify this assumption, or derive $(\star)$ without any assumption about the (surface) divergence.


My attempt to derive $(\star)$: let $\,\nabla_{s}\,$ and $P$ denote the surface gradient and the projection operator; then

\begin{align} \Delta_{LB}\, u & = \nabla_{s}\cdot\big(\nabla_{s}\,u\big) \stackrel{\color{red}{\huge ?}}{=} \nabla\cdot\big(\nabla_{s}\,u\big) \\ & = \nabla\cdot\big(P\;\nabla \,u\big) = \nabla\cdot\Big(\nabla\,u-\big(\vec{n}\cdot\nabla\,u\big)\,\vec{n}\Big) \\ & = \Delta\,u-\left(\nabla\cdot\vec{n}\right)\left(\vec{n}\cdot\nabla u\right)- \vec{n}\cdot\nabla\left(\vec{n}\cdot\nabla u\right) \\ & = \Delta u + \kappa\,u_{n} - u_{nn} \end{align}
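As a sanity check (my own, not from the paper), the following SymPy sketch verifies that $(\star)$ reproduces the intrinsic Laplacian $u_{\theta\theta}$ on the unit circle, with $\vec{n}$ extended off the curve as $(x,y)/r$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
r = sp.sqrt(x**2 + y**2)
n = sp.Matrix([x, y]) / r            # unit normal, extended off the circle along rays
u = x**2*y + x                       # arbitrary ambient test function

grad = lambda h: sp.Matrix([sp.diff(h, x), sp.diff(h, y)])
div  = lambda F: sp.diff(F[0], x) + sp.diff(F[1], y)

u_n   = (n.T * grad(u))[0]                # first normal derivative  n·∇u
u_nn  = (n.T * grad(u_n))[0]              # second normal derivative n·∇(n·∇u)
kappa = -div(n)                           # curvature with the paper's sign convention
star  = div(grad(u)) + kappa*u_n - u_nn   # right-hand side of (⋆)

# intrinsic Laplace-Beltrami on the unit circle: d²/dθ² of the restriction
circle    = {x: sp.cos(t), y: sp.sin(t)}
intrinsic = sp.diff(u.subs(circle), t, 2)
print(sp.simplify(star.subs(circle) - intrinsic))   # prints 0
```

Of course this only confirms the formula for one particular extension of $\vec{n}$; it does not justify the divergence assumption above.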


Best answer:

The surface gradient operator is defined as follows:

\begin{align} \nabla_s = \left(\mathbf{I} - \mathbf{n}\otimes\mathbf{n}\right)\cdot\nabla = \mathbf{I}\cdot\nabla - \left(\mathbf{n}\otimes\mathbf{n}\right)\cdot\nabla = \nabla - \mathbf{n}\left(\mathbf{n}\cdot\nabla\right) \tag{1} \end{align}

  • $\mathbf{n}$ is the unit normal vector
  • $\mathbf{I}$ is the second-order identity tensor
  • $\otimes$ is the tensor product
  • $\cdot$ is the scalar product

As you can see in $(1)$, we have subtracted the normal component of $\nabla$ from it, hence the name surface gradient.

Use $(1)$ to derive your formula. Consider the following:

\begin{align} \nabla_s\cdot\mathbf{F} = \left(\nabla - \mathbf{n}\left(\mathbf{n}\cdot\nabla\right)\right)\cdot\mathbf{F} = \nabla\cdot\mathbf{F} - \left(\mathbf{n}\cdot\nabla\right)\left(\mathbf{n}\cdot\mathbf{F}\right) = \nabla\cdot\mathbf{F} - \mathbf{n}\cdot\nabla\left(\mathbf{n}\cdot\mathbf{F}\right) \tag{2} \end{align}

Now, if you put $\mathbf{F} = \nabla_s u$, you get

\begin{align} \nabla_s\cdot\nabla_s u = \nabla\cdot\nabla_s u - \mathbf{n}\cdot\nabla\left(\mathbf{n}\cdot\nabla_s u\right) \tag{3} \end{align}

but

\begin{align} \mathbf{n}\cdot\nabla_s u = \mathbf{n}\cdot\left(\nabla u - \left(\mathbf{n}\cdot\nabla u\right)\mathbf{n}\right) = \mathbf{n}\cdot\nabla u - \left(\mathbf{n}\cdot\nabla u\right)\left(\mathbf{n}\cdot\mathbf{n}\right) = \mathbf{n}\cdot\nabla u - \mathbf{n}\cdot\nabla u = 0 \tag{4} \end{align}

and hence

\begin{align} \nabla_s\cdot\nabla_s u = \nabla\cdot\nabla_s u \tag{5} \end{align}
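For a concrete confirmation of $(4)$, and hence $(5)$, here is a small SymPy sketch (the radial field $\mathbf{n}=(x,y)/r$ is just an example of a unit normal field, and $u$ is left generic):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r = sp.sqrt(x**2 + y**2)
n = sp.Matrix([x, y]) / r                  # an example unit normal field
u = sp.Function('u')(x, y)                 # generic scalar field

grad   = lambda h: sp.Matrix([sp.diff(h, x), sp.diff(h, y)])
grad_s = grad(u) - (n.T * grad(u))[0] * n  # surface gradient, definition (1)

print(sp.simplify((n.T * grad_s)[0]))      # (4): n·∇_s u = 0, prints 0
```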

Another answer:

OK, I know this is an old topic, but I ran into the same issues, and there are several inaccuracies above, including the fact that Eq. ($\star$) from the OP is wrong. The correct formula for the surface Laplacian is \begin{align} \nabla_s^2 f = \nabla^2 f - \kappa \frac{\partial f}{\partial n} - \pmb{n}^T H(f)\, \pmb{n}\tag{$\star\star$} \end{align} where $\pmb{n}$ is the unit normal vector of the surface, $\kappa=\nabla\cdot \pmb{n}$ is the mean curvature, $\frac{\partial}{\partial n} = \pmb{n}\cdot\nabla$, $H(f)=\left\{\frac{\partial^2 f}{\partial x_i\partial x_j}\right\}_{i,j=1}^3$ is the Hessian matrix of $f$, and $^T$ denotes matrix transpose.

The thing that got me confused for a while is that some texts use the notation $\frac{\partial^2 f}{\partial n^2}$ for $\pmb{n}^T H(f)\pmb{n}$ above, whereas most people would logically understand $\frac{\partial^2 f}{\partial n^2}$ to be $\frac{\partial}{\partial n}\left(\frac{\partial f}{\partial n}\right)=\pmb{n}\cdot \nabla\left(\pmb{n}\cdot\nabla f\right)$. Unfortunately, these are not the same thing! Another source of confusion comes from the use of dyadic notation in such texts, which I find unnecessary and unclear. A last source of confusion is that the definition of the surface divergence is often not stated explicitly, or is again given in unclear dyadic notation. So let's start by clarifying what the definitions of the surface gradient and surface divergence are.

The surface gradient of a scalar $f$ is the orthogonal projection of the full gradient onto the surface ($\pmb{n}$ is the unit normal): $$\pmb{\nabla}_s f = \left(\mathbb{1}-\pmb{n}\pmb{n}^T\right)\pmb\nabla f = \pmb{\nabla} f - \pmb{n}\pmb{n}^T \pmb\nabla f$$

The surface divergence of a vector field $\pmb{F}$ is $$\pmb\nabla_s\cdot \pmb{F} = \pmb\nabla\cdot \pmb{F} - \pmb{n}^T(\pmb\nabla \pmb{F})\, \pmb{n}$$ where $\pmb\nabla\pmb F$ is the Jacobian matrix $\left\{\frac{\partial F_i}{\partial x_j}\right\}_{i,j=1}^3$ and $\pmb{n}\pmb{n}^T$ is the matrix $\left\{n_i n_j\right\}_{i,j=1}^3$ (which is also the definition of $\pmb{n}\otimes\pmb{n}$). Some use the suggestive (dyadic) notation $\pmb{\nabla}_s\cdot \pmb{F} = \left(\mathbb{1}-\pmb{n}\otimes\pmb{n}\right)\cdot\pmb\nabla\cdot\pmb{F}$ because of its similarity with the surface gradient, but the meaning really is the one given above.
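To see concretely that $\pmb{n}^T H(f)\pmb{n}$ and $\frac{\partial}{\partial n}\left(\frac{\partial f}{\partial n}\right)$ are different objects, here is a short SymPy sketch (in 2D for brevity; the level-set function $\varphi = x^2+2y^2$ and the test function $f=xy$ are arbitrary choices of mine):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
phi = x**2 + 2*y**2                        # level sets are ellipses
g = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)])
n = g / sp.sqrt((g.T * g)[0])              # unit normal n = ∇φ/|∇φ|
f = x*y                                    # test function

grad = lambda h: sp.Matrix([sp.diff(h, x), sp.diff(h, y)])

nHn = (n.T * sp.hessian(f, (x, y)) * n)[0] # n^T H(f) n
fn  = (n.T * grad(f))[0]                   # ∂f/∂n
fnn = (n.T * grad(fn))[0]                  # ∂/∂n(∂f/∂n)
print(sp.simplify(nHn - fnn))              # prints a nonzero expression
```

The difference is exactly $\frac{\partial}{\partial n}\left(\frac{\partial f}{\partial n}\right) - \pmb{n}^T H(f)\pmb{n} = \pmb{n}^T(\nabla \pmb{n})^T\nabla f = \left((\pmb{n}\cdot\nabla)\pmb{n}\right)\cdot\nabla f$ (see the computation below), which vanishes when $\pmb{n}$ is extended as the gradient of a signed distance function; that is why the discrepancy is easy to miss.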

A proof of the surface Laplacian formula ($\star\star$) is given in Xu & Zhao (2003). It follows directly from these definitions (I will drop the boldface for notational simplicity, but it is important to keep track of what is a scalar, a vector, and a matrix; I make no use of dyadic notation here):

\begin{align} \nabla_s^2 f = \nabla_s\cdot\nabla_s f = \nabla\cdot\left(\nabla f -nn^T\nabla f\right)-n^T\left(\nabla(\nabla f - nn^T \nabla f)\right)n \\=\nabla^2f\underbrace{-\nabla\cdot\big((n\cdot\nabla f) n\big)}_{A} - n^T H(f)n + \underbrace{n^T\nabla (nn^T\nabla f) n}_{B} \tag{1} \end{align} because $\nabla\nabla f=H(f)=\{\frac{\partial^2 f}{\partial x_i\partial x_j}\}_{i,j=1}^3$ (the Hessian matrix of $f$ is the Jacobian matrix of $\nabla f$). Using the product rule for the divergence, $\nabla\cdot(a\pmb{n}) = a \nabla \cdot \pmb{n} + \nabla a \cdot \pmb{n}$, we get \begin{align*} A &= - (n\cdot \nabla f)\nabla \cdot n - \nabla(n\cdot\nabla f)\cdot n. \end{align*} The last term above is (using the product rule) \begin{align*} n\cdot\nabla( n\cdot \nabla f) = \sum_{i,j} n_i \frac{\partial}{\partial x_i}\left(n_j \frac{\partial f}{\partial x_j}\right) = \sum_{i,j}n_i n_j \frac{\partial^2 f}{\partial x_i\partial x_j} + \sum_{i,j} n_i \frac{\partial n_j}{\partial x_i}\frac{\partial f}{\partial x_j} = n^T H(f) n + n^T(\nabla n)^T \nabla f \end{align*} so that \begin{align*} A = - \kappa \frac{\partial f}{\partial n} - n^T H(f) n - n^T(\nabla n)^T \nabla f \tag{2} \end{align*} Similarly, use of the product rule gives \begin{align*} B &= \sum_{i,j} n_i n_j \frac{\partial}{\partial x_j}[n n^T \nabla f]_i \\&= \sum_{i,j,k}n_i n_j \frac{\partial}{\partial x_j}\left[(nn^T)_{ik} (\nabla f)_k\right] \\&= \sum_{i,j,k}n_i n_j \frac{\partial}{\partial x_j}\left[n_i n_k \frac{\partial f}{\partial x_k}\right] \\&= \sum_{i,j,k}n_i n_j \frac{\partial}{\partial x_j}(n_i n_k)\frac{\partial f}{\partial x_k} + \sum_{i,j,k} n_i^2 n_j n_k \frac{\partial^2 f}{\partial x_j \partial x_k} \end{align*} Because $\lVert n\rVert^2=\sum_i n_i^2=1$, the last sum gives $n^T H(f) n$. In the first sum, we use the product rule once more. One of the terms is proportional to $\sum_i n_i \frac{\partial n_i}{\partial x_j}$, which vanishes because $\frac{\partial}{\partial x_j}\lVert n\rVert^2=0$. The remaining term is $\sum_{i,j,k} n_i^2 n_j \frac{\partial n_k}{\partial x_j}\frac{\partial f}{\partial x_k} = \sum_{j,k} n_j \frac{\partial n_k}{\partial x_j}\frac{\partial f}{\partial x_k} = n^T (\nabla n)^T \nabla f$, so that \begin{align*} B = n^T(\nabla n)^T \nabla f + n^T H(f) n \tag{3} \end{align*} Substituting (2) and (3) in (1) gives ($\star\star$).
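As a final sanity check (mine, tying this back to the OP's metric formula $(\ast)$): on the unit sphere, $(\star\star)$ evaluated with the radial extension $\pmb{n}=(x,y,z)/r$ agrees with the Laplace-Beltrami operator computed from the metric in spherical coordinates. A SymPy sketch:

```python
import sympy as sp

x, y, z, th, ph = sp.symbols('x y z theta phi', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
n = sp.Matrix([x, y, z]) / r              # unit normal of the spheres r = const
f = x**2*z + y                            # arbitrary ambient test function

grad = lambda h: sp.Matrix([sp.diff(h, v) for v in (x, y, z)])
div  = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

kappa = div(n)                            # mean curvature, equals 2/r here
fn    = (n.T * grad(f))[0]                # ∂f/∂n
nHn   = (n.T * sp.hessian(f, (x, y, z)) * n)[0]
star2 = div(grad(f)) - kappa*fn - nHn     # right-hand side of (⋆⋆)

# Laplace-Beltrami on the unit sphere from the metric formula:
# sqrt|g| = sin(θ), g^θθ = 1, g^φφ = 1/sin²(θ)
S  = {x: sp.sin(th)*sp.cos(ph), y: sp.sin(th)*sp.sin(ph), z: sp.cos(th)}
F  = f.subs(S)
lb = (sp.diff(sp.sin(th)*sp.diff(F, th), th) / sp.sin(th)
      + sp.diff(F, ph, 2) / sp.sin(th)**2)
print(sp.simplify(star2.subs(S) - lb))    # prints 0
```

The same check also passes with $\pmb{n}\cdot\nabla(\pmb{n}\cdot\nabla f)$ in place of $\pmb{n}^T H(f)\pmb{n}$, because the radial extension satisfies $(\pmb{n}\cdot\nabla)\pmb{n}=0$; for a general unit extension of the normal, only $(\star\star)$ holds.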