What is the divergence of a distribution?


Let

  • $d\in\mathbb N$
  • $\Omega\subseteq\mathbb R^d$ be open
  • $\mathcal D(\Omega):=C_c^\infty(\Omega)$

If $p\in \mathcal D'(\Omega)$, then $$\frac{\partial p}{\partial x_i}(\phi):=-p\left(\frac{\partial\phi}{\partial x_i}\right)\;\;\;\text{for }i\in\left\{1,\ldots,d\right\}\text{ and }\phi\in\mathcal D(\Omega)$$ and $$\nabla p(\phi):=\sum_{i=1}^d\frac{\partial p}{\partial x_i}(\phi_i)\;\;\;\text{for }\phi\in\mathcal D(\Omega)^d\;.$$
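A standard example of this definition in action (not part of the definitions above, just an illustration): the distributional derivative of the Heaviside step function $H$ (with $H(x)=1$ for $x>0$ and $H(x)=0$ otherwise), viewed as an element of $\mathcal D'(\mathbb R)$, is the Dirac distribution:

```latex
% Distributional derivative of the Heaviside step H:
\frac{\partial H}{\partial x}(\phi)
  = -H(\phi')
  = -\int_0^\infty \phi'(x)\,dx
  = \phi(0)
  = \delta_0(\phi)
  \qquad\text{for } \phi\in\mathcal D(\mathbb R),
```

so $H' = \delta_0$ even though $H$ has no classical derivative at $0$.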

Is there some notion of the divergence of a distribution too?


Best answer

In distribution theory, many notions generalize what happens in the space $L^1_{loc}(\Omega)$. The distributional derivative is an operator $D^\alpha : \mathcal{D}'(\Omega) \longrightarrow \mathcal{D}'(\Omega)$; its restriction to (distributions induced by functions in) $L^1_{loc}(\Omega)$ is what many authors call the weak derivative, so in this sense the weak and the distributional derivative are the same thing. Consequently, once you know what the weak gradient is, you know what the distributional gradient is. The divergence then follows by pairing with the appropriate scalar product.
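The identity underlying the weak (equivalently, distributional) derivative is integration by parts with no boundary term, since test functions have compact support. A minimal symbolic check of that identity, using SymPy and a polynomial stand-in for a test function (a true bump function is not needed here, only vanishing at the endpoints; the choices of $f$ and $\varphi$ are illustrative assumptions):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)            # smooth, hence locally integrable on [-1, 1]
phi = (1 - x**2)**2      # vanishes at x = +/-1: stand-in for a test function

# Integration by parts with no boundary term:
#   integral of f' * phi  ==  - integral of f * phi'
lhs = sp.integrate(sp.diff(f, x) * phi, (x, -1, 1))
rhs = -sp.integrate(f * sp.diff(phi, x), (x, -1, 1))
assert sp.simplify(lhs - rhs) == 0
```

This is exactly the pairing $\frac{\partial p}{\partial x}(\phi) = -p(\phi')$ from the question, evaluated for a distribution induced by a smooth function.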

Second answer

If $\mathcal D (\Omega, \Bbb R^d)$ is the space of vector-valued test functions, there is a topology on it, very similar to the Schwartz topology on $\mathcal D (\Omega)$, that makes it a locally convex topological vector space. It makes sense, then, to consider its topological dual, $\mathcal D ' (\Omega, \Bbb R^d)$, whose elements are called vector-valued distributions. Formally, $\mathcal D ' (\Omega, \Bbb R^d) = \underbrace {\mathcal D ' (\Omega) \oplus \dots \oplus \mathcal D ' (\Omega)} _{d \text{ times}}$, in the topological sense.
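Concretely, and consistent with the direct-sum decomposition above, a vector-valued distribution $p = (p_1, \dots, p_d)$ pairs with a vector-valued test function componentwise:

```latex
\langle p, \varphi \rangle
  = \sum_{i=1}^d \langle p_i, \varphi_i \rangle
  \qquad\text{for } \varphi = (\varphi_1, \dots, \varphi_d) \in \mathcal D(\Omega, \Bbb R^d).
```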

To begin with, let $p : \Omega \to \Bbb R^d$ be a smooth vector-valued map. Then $\text{div } p$ is a smooth scalar function, which we may view as a distribution, and its action on a scalar test function $\varphi \in \mathcal D(\Omega)$ is

$$\begin{aligned} \langle \text{div } p, \varphi \rangle &= \left< \sum _i \partial_i p_i, \varphi \right> = \sum _i \langle \partial_i p_i, \varphi \rangle = \sum _i \int (\partial_i p_i) \varphi \\ &= \sum _i \int \partial_i (p_i \varphi) - \sum _i \int p_i \partial_i \varphi = \sum _i 0 - \int \sum _i p_i \partial_i \varphi = - \int \langle p, \nabla \varphi \rangle = - \langle p, \nabla \varphi \rangle , \end{aligned}$$

where the last $\langle \cdot, \cdot \rangle$ means evaluation of a distribution on a test function, and the $\langle \cdot, \cdot \rangle$ immediately preceding it is the scalar product in $\Bbb R^d$.

This justifies defining the divergence of a vector-valued distribution $p \in \mathcal D'(\Omega, \Bbb R^d)$ as the scalar distribution $\text{div } p \in \mathcal D'(\Omega)$ given by $\langle \text{div } p, \varphi \rangle := - \langle p, \nabla \varphi \rangle$ for all $\varphi \in \mathcal D(\Omega)$.
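The computation above rests on the multidimensional integration-by-parts identity $\int (\text{div } p)\, \varphi = -\int \langle p, \nabla \varphi \rangle$, where the boundary term vanishes because $\varphi$ has compact support. A small SymPy sketch verifying it in two dimensions on $[-1,1]^2$, with an illustrative smooth vector field and a polynomial stand-in for a test function that vanishes on the boundary (both choices are assumptions for the example):

```python
import sympy as sp

x, y = sp.symbols('x y')
p = (x * y, sp.sin(x) * y)                  # smooth vector field (illustrative choice)
phi = ((1 - x**2) * (1 - y**2))**2          # scalar test function, zero on the boundary

# div p = d(p_1)/dx + d(p_2)/dy
div_p = sp.diff(p[0], x) + sp.diff(p[1], y)

# integral of (div p) * phi  ==  - integral of p . grad(phi)  over [-1,1]^2
lhs = sp.integrate(div_p * phi, (x, -1, 1), (y, -1, 1))
rhs = -sp.integrate(p[0] * sp.diff(phi, x) + p[1] * sp.diff(phi, y),
                    (x, -1, 1), (y, -1, 1))
assert sp.simplify(lhs - rhs) == 0
```

For a genuine distribution $p$ (one not induced by a smooth field), this identity is no longer something to verify; it becomes the definition of $\text{div } p$.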