Integration by parts of vector fields (divergence, gradients)


I have the following system of equations in 3D:

$\boldsymbol{\Delta}\mathbf{u}(\mathbf{x})-\boldsymbol{\nabla}\operatorname{div}\mathbf{u}(\mathbf{x})=\mathbf{F}(\mathbf{x})$

I'm solving for $\mathbf{u}(\mathbf{x})$ which is a vector of the following form:

$\mathbf{u}(\mathbf{x})=[u_1(\mathbf{x}),u_2(\mathbf{x}),u_3(\mathbf{x})]^T$

I want to solve this system using the finite element method, so I have to write it in weak form. I multiply (dot product) both sides of the equation by a test field $\mathbf{v}(\mathbf{x})$ and integrate over the volume $V$.

Now, since the equation contains a Laplacian, I have to integrate by parts, and I'm not entirely sure that what I'm doing is correct; I'm not very good at vector calculus.

First I need to understand how we got to this system of equations. The professor told us that:

$\operatorname{div}\Big[\big(\boldsymbol{\nabla}\mathbf{u}(\mathbf{x})-\boldsymbol{\nabla}\mathbf{u}^T(\mathbf{x})\big)\Big]=\boldsymbol{\Delta}\mathbf{u}(\mathbf{x})-\boldsymbol{\nabla}\operatorname{div}\mathbf{u}(\mathbf{x})$

I'm getting a bit confused with the difference between the gradient and the divergence of a vector.

Back to the weak form: I came up with the following, but I'm not sure that it's right:

$\operatorname{div}\Big[\big(\boldsymbol{\nabla}\mathbf{u}(\mathbf{x})\mathbf{v}(\mathbf{x})\big)\Big]=\boldsymbol{\Delta}\mathbf{u}(\mathbf{x})\mathbf{v}(\mathbf{x})+\boldsymbol{\nabla}\mathbf{u}(\mathbf{x}):\boldsymbol{\nabla}\mathbf{v}(\mathbf{x})$

Is this correct? And if so, how do I handle the second part, the one with the transpose?

Best Answer:

In general, the divergence reduces the dimensionality, while taking the gradient increases it. Example:

$$\operatorname{div}\boldsymbol{u} = \nabla \cdot \boldsymbol{u} = \sum_{i=1}^N \partial_i u_i \in \mathbb{R}.$$ Thus, taking the divergence of an $N$-dimensional vector gives a scalar.

Applying the gradient to a vector is usually denoted by $\nabla \boldsymbol{u}^T$, since in most conventions $\nabla$ is a column-vector-like operator. You can then think of applying $\nabla$ just as in a matrix-vector multiplication: $$\nabla \boldsymbol{u}^T = \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix} \begin{pmatrix}u_1 & u_2 & u_3 \end{pmatrix} = \begin{pmatrix} \partial_x u_1 & \partial_x u_2 & \partial_x u_3 \\ \partial_y u_1 & \partial_y u_2 & \partial_y u_3 \\ \partial_z u_1 & \partial_z u_2 & \partial_z u_3 \end{pmatrix}$$ Thus, taking the gradient of an $N$-dimensional vector gives you an $N\times N$ matrix.
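This dimension bookkeeping can be checked with a small sympy sketch (the variable names here are my own, not standard notation): the divergence of a 3-vector is a single scalar expression, while its gradient is a $3\times 3$ matrix.

```python
# Sketch: divergence reduces a vector to a scalar, the gradient
# produces a matrix. Uses the convention (grad u^T)_{ij} = d u_j / d x_i.
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)
# Generic smooth vector field u = (u1, u2, u3)
u = sp.Matrix([sp.Function(f"u{i}")(x, y, z) for i in (1, 2, 3)])

# Divergence: sum_i d u_i / d x_i  -> one scalar expression
div_u = sum(u[i].diff(coords[i]) for i in range(3))

# Gradient in the transposed-Jacobian convention above:
# entry (i, j) = d u_j / d x_i, rows indexed by the derivative
grad_uT = sp.Matrix(3, 3, lambda i, j: u[j].diff(coords[i]))

print(grad_uT.shape)  # (3, 3)
```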

For the identity: this is not correct as written down, since the dimensions of the terms in the brackets can never match for vector-valued quantities. It looks like a take on the vector Laplacian and the cross-product rule (for $\nabla$ treated as an ordinary vector) $$\nabla \times (\nabla \times \boldsymbol{u}) = \underbrace{\nabla \cdot}_{\operatorname{div}} \Big(\boldsymbol{u} \nabla^T - \nabla \boldsymbol{u}^T \Big)$$ However, this is also not really correct, since $\nabla$ is an operator that should be applied from the left, not the right. And the sign is also not right, as far as I can see.
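The underlying vector-Laplacian identity itself, $\nabla \times (\nabla \times \boldsymbol{u}) = \boldsymbol{\nabla}\operatorname{div}\boldsymbol{u} - \boldsymbol{\Delta}\boldsymbol{u}$, can be verified symbolically. A sketch with hand-rolled curl, divergence, and Laplacian on an arbitrary concrete test field:

```python
# Check curl(curl u) = grad(div u) - laplacian(u) componentwise in 3D.
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)
# An arbitrary smooth test field
u = sp.Matrix([x**2 * y, y * z**2, sp.sin(x) * z])

def curl(v):
    return sp.Matrix([
        v[2].diff(y) - v[1].diff(z),
        v[0].diff(z) - v[2].diff(x),
        v[1].diff(x) - v[0].diff(y),
    ])

div_u = sum(u[i].diff(coords[i]) for i in range(3))
grad_div = sp.Matrix([div_u.diff(c) for c in coords])
lap_u = sp.Matrix([sum(u[i].diff(c, 2) for c in coords) for i in range(3)])

lhs = curl(curl(u))
rhs = grad_div - lap_u
print(sp.simplify(lhs - rhs))  # Matrix([[0], [0], [0]])
```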

Coming to your last point: the formula you give is almost correct; a quick check of the dimensions would have shown you that the first summand has to be $$ \Delta \boldsymbol{u} \color{red} \cdot \boldsymbol{v} $$ since both

  1. the LHS is a scalar, by reduction of the dimensionality through $\operatorname{div} = \nabla \cdot$ of a vector ($\nabla \boldsymbol{u}$ is a matrix, which multiplied with the vector $\boldsymbol{v}$ gives a vector), and

  2. the second summand on the RHS is also a scalar, since the double contraction $\colon$ of the matrices $\nabla \boldsymbol{u}, \nabla \boldsymbol{v}$ is defined as $\nabla \boldsymbol{u} \colon \nabla \boldsymbol{v} = \sum_{i,j=1}^N (\nabla \boldsymbol{u})_{ij} (\nabla \boldsymbol{v})_{ij}$.
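The corrected product rule, $\operatorname{div}\big[(\nabla\boldsymbol{u})\,\boldsymbol{v}\big] = \Delta\boldsymbol{u}\cdot\boldsymbol{v} + \nabla\boldsymbol{u}:\nabla\boldsymbol{v}$, can also be confirmed symbolically. A sketch using the convention $(\nabla\boldsymbol{u})_{ij} = \partial_i u_j$ from above, on arbitrary concrete fields $\boldsymbol{u}, \boldsymbol{v}$:

```python
# Verify div((grad u) v) = (laplacian u) . v + grad u : grad v,
# with (grad u)_{ij} = d u_j / d x_i and ":" the Frobenius product.
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)
u = sp.Matrix([x * y**2, sp.cos(z) * x, y * z])
v = sp.Matrix([z**2, x * y, sp.sin(y)])

grad = lambda w: sp.Matrix(3, 3, lambda i, j: w[j].diff(coords[i]))
div_vec = lambda w: sum(w[i].diff(coords[i]) for i in range(3))
lap = lambda w: sp.Matrix([sum(w[i].diff(c, 2) for c in coords) for i in range(3)])

lhs = div_vec(grad(u) * v)                        # div((grad u) v), a scalar
frob = sum(grad(u).multiply_elementwise(grad(v)))  # grad u : grad v
rhs = lap(u).dot(v) + frob

print(sp.simplify(lhs - rhs))  # 0
```

Integrating this identity over $V$ and applying the divergence theorem to the left-hand side then yields the usual weak form, with the $\Delta\boldsymbol{u}\cdot\boldsymbol{v}$ volume term traded for $-\nabla\boldsymbol{u}:\nabla\boldsymbol{v}$ plus a boundary integral.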