Distributional divergence of $\frac{x}{\|x\|}$.


I have the vector field \begin{align*} X:\mathbb R^d&\to\mathbb R^d\\ x&\mapsto\frac{x}{\|x\|} \end{align*} which is differentiable away from the origin, and I am interested in its divergence. An easy computation gives $$ \mathrm{div}(X)=\left\{\begin{array}{ll} 2\delta_0&d=1\\ \frac{d-1}{\|x\|}&d\geq 2. \end{array}\right. $$ The problem with this computation is that the $d=1$ case is computed as a distributional derivative, while the higher-dimensional case is computed classically. This bothers me because for the longest time I thought that $$ \mathrm{div}(X) $$ should have a Dirac delta at the origin in every dimension, and I still believe it should.

The reason I believe there is a Dirac delta component at the origin is that infinitely many integral curves of $X$ start from the origin. Put differently, if one looks at $-X$ and searches for solutions to the continuity equation for $-X$, i.e. $$ \partial_t \mu_t+\mathrm{div}((-X)\mu_t)=0, $$ one sees mass accumulate at the origin.
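The accumulation picture can be checked with a quick simulation (my own sketch, not part of the question; the sample size and time $t=0.5$ are arbitrary choices). Under the flow of $-X$ each particle moves toward the origin at unit speed, so a particle starting at $x_0$ reaches $0$ at time $\|x_0\|$:

```python
import math, random

# Under the flow of -X each particle moves toward 0 at unit speed:
# x(t) = (|x0| - t)_+ * x0/|x0|, so everything with |x0| <= t has
# reached the origin by time t.  Sample the uniform measure on the
# unit disc (d = 2) and measure the mass sitting at the origin.
random.seed(0)
pts = []
while len(pts) < 10_000:                       # rejection-sample the disc
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x*x + y*y <= 1.0:
        pts.append((x, y))

t = 0.5
frac = sum(1 for (x, y) in pts if math.hypot(x, y) <= t) / len(pts)
print(frac)    # ≈ t**2 = 0.25: a genuine atom forms for mu_t at the origin
assert abs(frac - t**2) < 0.02
```

So $\mu_t$ really does develop an atom at the origin, even though (as the answers below show) $\mathrm{div}(X)$ itself has no delta for $d\geq 2$.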

Question 1): Did I make a mistake in my computation, and does $\mathrm{div}(X)$ in fact have a Dirac delta at the origin? More precisely, if my computation is correct classically but wrong distributionally, how would I carry out the distributional computation? Evaluating in polar coordinates still does not yield a singular part.

Question 2): Is my intuition wrong, and in dimension greater than one is there never a Dirac delta in this computation, even distributionally?

Any help or literature is appreciated. Maybe it is all because my differential geometry skills are very rusty.

Best Answer

The divergence you wrote is correct distributionally as well. To see this, let $\phi$ be any test function; then using the definition of distributional divergence, dominated convergence, and the divergence theorem, we get \begin{align} \langle\text{div}(X),\phi\rangle&:=-\sum_{i=1}^n\langle X^i,\partial_i\phi\rangle\\ &=-\sum_{i=1}^n\int_{\Bbb{R}^n}X^i\partial_i\phi\,dV\\ &=-\lim_{\epsilon\to 0^+}\sum_{i=1}^n\int_{B_{\epsilon}(0)^c}X^i\partial_i\phi\,dV\tag{DCT}\\ &=\lim_{\epsilon\to 0^+}\sum_{i=1}^n\int_{B_{\epsilon}(0)^c}[(\partial_iX^i)\phi - \partial_i(X^i\phi)]\,dV\\ &=\lim_{\epsilon\to 0^+}\left[\int_{B_{\epsilon}(0)^c}\frac{n-1}{\|x\|}\phi\,dV +\int_{S_{\epsilon}(0)}\phi\,dA \right]\tag{$*$}. \end{align} Notice that in this last equality, I used the divergence theorem; beware that the outward normal to the boundary of $B_{\epsilon}(0)^c$ actually points toward the origin, i.e. is $-X$, so this extra minus sign cancels the minus sign which is already present. Now, in the first term, $\frac{1}{\|x\|}$ is integrable in a neighborhood of the origin in $\Bbb{R}^n$ for $n\geq 2$, so we can use the dominated convergence theorem to say the first limit is $\int_{\Bbb{R}^n}\frac{n-1}{\|x\|}\phi\,dV$. For the second term, since $n\geq 2$, the 'surface area' of the sphere $S_{\epsilon}(0)$ scales like $\epsilon^{n-1}$, which vanishes as $\epsilon\to 0^+$. So, the fact that $\phi$ is a test function (hence bounded) means the integral over the sphere vanishes too. Hence, the final result is \begin{align} \langle\text{div}(X),\phi\rangle&=\int_{\Bbb{R}^n}\frac{n-1}{\|x\|}\phi\,dV+0= \left\langle\frac{n-1}{\|x\|},\phi\right\rangle. \end{align} Thus, even in the distributional sense, we have $\text{div}(X)=\frac{n-1}{\|x\|}$ for $n\geq 2$.
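As a numerical sanity check (my own sketch, not part of the answer), one can test the identity $-\int X\cdot\nabla\phi\,dV=\int\frac{n-1}{\|x\|}\phi\,dV$ in $n=2$ for the radial test function $\phi(x)=e^{-\|x\|^2}$ (an assumed choice; Schwartz rather than compactly supported, but decaying fast enough). For radial $\phi$, $X\cdot\nabla\phi=\phi'(r)$, and both sides reduce to one-dimensional integrals in $r$:

```python
import math

# Radial test function phi(x) = exp(-|x|^2) (assumed choice).
phi  = lambda r: math.exp(-r * r)
dphi = lambda r: -2.0 * r * math.exp(-r * r)   # phi'(r)

# Midpoint rule in r on (0, R]; for radial f, ∫_{R^2} f dV = 2π ∫ f(r) r dr.
R, N = 10.0, 200_000
h = R / N
rs = [(k + 0.5) * h for k in range(N)]

# LHS: <div X, phi> := -∫ X·∇phi dV; for radial phi, X·∇phi = phi'(r).
lhs = -sum(dphi(r) * 2 * math.pi * r for r in rs) * h
# RHS: ∫ (n-1)/|x| phi dV with n = 2, i.e. ∫ (phi(r)/r) 2π r dr = 2π ∫ phi(r) dr.
rhs = sum(phi(r) * 2 * math.pi for r in rs) * h

print(lhs, rhs)           # both ≈ pi**1.5 ≈ 5.568
assert abs(lhs - rhs) < 1e-6
```

The two sides agree to quadrature accuracy, with no extra mass at the origin, consistent with the absence of a delta for $n\geq 2$ (a check against one test function, of course, not a proof).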


Note that the proof goes through up to $(*)$ even for $n=1$. To proceed further, note that if $n=1$, the first integral vanishes (reflecting that $\frac{x}{|x|}$ is constantly equal to $\pm 1$ away from the origin, so its derivative vanishes there). In the 1-dimensional case, the "boundary sphere $S_{\epsilon}(0)$" is really the 2-point set $\{-\epsilon,\epsilon\}$, and the integral over this set just means adding the values of $\phi$ at these points: $\phi(-\epsilon)+\phi(\epsilon)$. Now, taking the limit $\epsilon\to 0^+$ and using continuity of $\phi$, we get $2\phi(0)= \langle 2\delta_0,\phi\rangle$, as expected.
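The $n=1$ case can also be checked numerically (my sketch, again with the assumed test function $\phi(x)=e^{-x^2}$): the pairing $-\int \operatorname{sgn}(x)\,\phi'(x)\,dx$ should come out to $2\phi(0)=2$.

```python
import math

# Test function phi(x) = exp(-x^2) (assumed choice; decays fast enough).
dphi = lambda x: -2.0 * x * math.exp(-x * x)   # phi'(x)

# <(x/|x|)', phi> := -∫ sgn(x) phi'(x) dx, midpoint rule on [-R, R].
R, N = 10.0, 200_000
h = 2 * R / N
xs = [-R + (k + 0.5) * h for k in range(N)]    # midpoints never hit x = 0
val = -sum(math.copysign(1.0, x) * dphi(x) for x in xs) * h

print(val)                # ≈ 2.0 = 2*phi(0) = <2*delta_0, phi>
assert abs(val - 2.0) < 1e-6
```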


Function Away from the Origin

$\nabla\|x\|=\frac{x}{\|x\|}$. Therefore, $$ \begin{align} \nabla\cdot\frac{x}{\|x\|} &=\frac{\nabla\cdot x}{\|x\|}-x\cdot\frac{x}{\|x\|^3}\tag{1a}\\ &=\frac{d}{\|x\|}-\frac1{\|x\|}\tag{1b}\\ &=\frac{d-1}{\|x\|}\tag{1c} \end{align} $$ Explanation:
$\text{(1a)}$: $\nabla\cdot(av)=a\,\nabla\cdot v+v\cdot\nabla a$
$\text{(1b)}$: $\nabla\cdot x=d$
$\text{(1c)}$: simplify

Distribution Supported at the Origin

I had originally computed the part of the distribution supported at the origin using the Divergence Theorem, but while trying to justify its application to a non-$C^1$ vector field, it became evident that I was in effect computing the integral of the divergence without the theorem.

Let us compute the divergence of $\frac{x}{(x\cdot x+\epsilon^2)^{1/2}}$, which is a $C^\infty$ approximation of $\frac{x}{\|x\|}$. $$ \begin{align} \nabla\cdot\frac{x}{(x\cdot x+\epsilon^2)^{1/2}} &=\frac{d}{(x\cdot x+\epsilon^2)^{1/2}}-x\cdot\frac{x}{(x\cdot x+\epsilon^2)^{3/2}}\tag{2a}\\ &=\frac{d-1}{(x\cdot x+\epsilon^2)^{1/2}}+\frac{\epsilon^2}{(x\cdot x+\epsilon^2)^{3/2}}\tag{2b} \end{align} $$ Explanation:
$\text{(2a)}$: $\nabla\cdot(av)=a\,\nabla\cdot v+v\cdot\nabla a$
$\text{(2b)}$: $x\cdot x=x\cdot x+\epsilon^2-\epsilon^2$

Since $\frac{d-1}{\|x\|}$ is locally in $L^1$ and dominates $\frac{d-1}{(x\cdot x+\epsilon^2)^{1/2}}$, the integral of the left summand from $\text{(2b)}$ over $B(0,r)$ vanishes as $r\to0$, uniformly in $\epsilon$.

Consider the integral of the right summand from $\text{(2b)}$ over $B(0,r)$: $$ \begin{align} &\lim_{\epsilon\to0}\int_{B(0,r)}\frac{\epsilon^2}{(x\cdot x+\epsilon^2)^{3/2}}\,\mathrm{d}x\\ &=\omega_{d-1}\lim_{\epsilon\to0}\int_0^r\frac{\epsilon^2}{(t^2+\epsilon^2)^{3/2}}\,t^{d-1}\,\mathrm{d}t\tag{3a}\\ &=\omega_{d-1}\lim_{\epsilon\to0}\epsilon^{d-1}\int_0^{r/\epsilon}\frac1{(t^2+1)^{3/2}}\,t^{d-1}\,\mathrm{d}t\tag{3b}\\ &=\lim\limits_{\epsilon\to0}\left\{\begin{array}{} 2\,\frac{r}{\sqrt{r^2+\epsilon^2}}&\text{if $d=1$}\\ 2\pi\epsilon\left(1-\frac\epsilon{\sqrt{r^2+\epsilon^2}}\right)&\text{if $d=2$}\\ 4\pi\epsilon^2\left(\log\left(\frac{r+\sqrt{r^2+\epsilon^2}}{\epsilon}\right)-\frac{r}{\sqrt{r^2+\epsilon^2}}\right)&\text{if $d=3$}\\ \omega_{d-1}\epsilon^2\left[0,\frac{r^{d-3}}{d-3}\right]_\#&\text{if $d\ge4$} \end{array}\right.\tag{3c}\\[6pt] &=\left\{\begin{array}{} 2&\text{if $d=1$}\\[3pt] 0&\text{if $d\ge2$} \end{array}\right.\tag{3d} \end{align} $$ Explanation:
$\text{(3a)}$: convert to polar coordinates
$\phantom{\text{(3a):}}$ the "surface area" of $S^{d-1}$ is $\omega_{d-1}=\frac{2\pi^{d/2}}{\Gamma(d/2)}$
$\text{(3b)}$: substitute $t\mapsto t\epsilon$
$\text{(3c)}$: compute the integrals for $d=1,2,3$
$\phantom{\text{(3c):}}$ bound the integral for $d\ge4$
$\phantom{\text{(3c):}}$ where $[a,b]_\#$ represents a number in $[a,b]$
$\text{(3d)}$: evaluate the limits
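One can sanity-check $(3)$ numerically in $d=2$ (my sketch, not part of the answer; $r=1$ and the step count are assumed choices): compute the integral of the right summand of $\text{(2b)}$ over $B(0,r)$ by a radial midpoint rule and compare with the closed form from $\text{(3c)}$:

```python
import math

# Numerical check of (3) in d = 2: I(eps) = ∫_{B(0,r)} eps^2/(|x|^2+eps^2)^{3/2} dx,
# computed with a radial midpoint rule (dx = 2π t dt).
def mass(eps, r=1.0, N=100_000):
    h = r / N
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * h                      # radial midpoint
        total += eps**2 / (t*t + eps**2)**1.5 * 2 * math.pi * t
    return total * h

# Closed form from (3c) for d = 2.
def closed(eps, r=1.0):
    return 2 * math.pi * eps * (1 - eps / math.sqrt(r*r + eps*eps))

for eps in (0.5, 0.1, 0.01):
    assert abs(mass(eps) - closed(eps)) < 1e-4
    print(eps, mass(eps))   # shrinks like 2*pi*eps, consistent with (3d)
```

The mass near the origin scales like $2\pi\epsilon$ and disappears in the limit, so no delta survives for $d=2$.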

In Conclusion

Putting together $(1)$ and $(3)$ gives $$ \nabla\cdot\frac{x}{\|x\|}=\left\{\begin{array}{} 2\delta(x)&\text{if $d=1$}\\ \displaystyle\frac{d-1}{\|x\|}&\text{if $d\ge2$} \end{array}\right.\tag4 $$


(This is in addition to the other two answers: here, we give the 'correct' generalisation of $\operatorname{sgn}'=2\delta_0$ to higher dimensions. Put another way, we are answering your "question 2)".)

Your intuition is not quite right: as the dimension grows, $1/|x|$ should be thought of as less and less singular. This is, e.g., what is behind the fact that $1/|x|$ is locally integrable once $d>1$. Instead, $1/|x|^d$ is the correct threshold for (local) integrability.

Similarly, one should instead guess that it is $\DeclareMathOperator{\mydiv}{div} \newcommand{\dd}{\text d} \mydiv(x/|x|^{d})$ whose (distributional) divergence carries a delta. Indeed, $$\fbox{$\mydiv\bigg(\frac x{|x|^{d}}\bigg) = \omega_{d-1} \delta_0.$} $$

($\omega_{d-1}=\int_{|x|=1}\dd \sigma$ is the surface area of $\Bbb S^{d-1}$.) The first sign of hope is that for all $x\neq 0$, $$ \mydiv(x|x|^{-d})=\let\del\partial \del_i (x_i|x|^{-d}) = d|x|^{-d}+ x_i\cdot (-dx_i |x|^{-d-2}) = d|x|^{-d} - d|x|^{-d} =0. \tag{$\star$ }\label{1}$$ Now, from the definition of the distributional divergence, \begin{align} \left\langle\mydiv(x|x|^{-d}),\phi\right\rangle &= -\int_{\mathbb R^d}\frac{x}{|x|^d} \cdot \nabla \phi(x) \, \dd x \\ &= -\lim_{\epsilon\to 0}\int_{|x|>\epsilon}\frac{x}{|x|^d} \cdot \nabla \phi(x) \, \dd x \\ &= \lim_{\epsilon\to 0}\Bigg(\int_{|x|>\epsilon}\underbrace{\mydiv\Big(\frac{x}{|x|^d}\Big)}_{=0 \text{ by }\eqref{1}} \phi(x) \, dx - \int_{|x|=\epsilon} \frac{x\phi(x)}{|x|^d}\cdot n \,\dd \sigma(x) \Bigg)\\ &= \lim_{\epsilon\to 0}\int_{|x|=\epsilon} \frac{\phi(x)}{|x|^{d-1}}\,\dd \sigma(x) \end{align} since $n= -x/|x|$ is the outward normal to $\{|x|>\epsilon\}$. Performing the change of variables $y= x/\epsilon$, \begin{align}\left\langle\mydiv(x|x|^{-d}),\phi\right\rangle &= \lim_{\epsilon\to 0}\int_{|y|=1} \phi(\epsilon y)\,\dd \sigma(y) \\ &= \phi(0)\int_{|y|=1}\,\dd \sigma(y) + \lim_{\epsilon\to 0}\int_{|y|=1}(\phi(\epsilon y)-\phi(0)) \,\dd \sigma(y) \\ &=\langle \omega_{d-1}\delta_0,\phi\rangle, \end{align} where the limit is zero from the smoothness of $\phi$ (e.g. $|\phi(\epsilon y) - \phi(0)| \le \|\nabla \phi\|_{L^\infty} \epsilon$), showing the result.
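As a final numerical sanity check (my sketch, with the assumed radial test function $\phi(x)=e^{-|x|^2}$): in $d=2$ the pairing $-\int_{\mathbb R^2}\frac{x}{|x|^2}\cdot\nabla\phi\,\dd x$ should come out to $\omega_1\phi(0)=2\pi$.

```python
import math

# Radial test function phi(x) = exp(-|x|^2) (assumed choice).
dphi = lambda r: -2.0 * r * math.exp(-r * r)   # phi'(r)

# <div(x/|x|^2), phi> := -∫_{R^2} (x/|x|^2)·∇phi dx in d = 2.
# For radial phi, (x/|x|^2)·∇phi = phi'(r)/r, and dx = 2π r dr.
R, N = 10.0, 200_000
h = R / N
val = -sum(dphi((k + 0.5) * h) * 2 * math.pi for k in range(N)) * h

print(val)                # ≈ 2*pi = omega_1 * phi(0)
assert abs(val - 2 * math.pi) < 1e-6
```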