Differentiating a Distribution


If $f\in C^1(\mathbb{R}^n)$ and $\phi\in C_0^\infty(\mathbb{R}^n)$, then \begin{align*} \partial_j(\mathrm{test}\ f)(\phi) = -\mathrm{test}\ f(\partial_j\phi) = \mathrm{test}\ \partial_jf(\phi), \end{align*} where $\mathrm{test}\ f$ denotes the distribution induced by $f$. This is easy to prove using integration by parts. My question instead concerns differentiating functions which fail to be continuous at some points, say $f(x,y) = \log(x^2+y^2)$.
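As a quick numerical sanity check of this identity in one dimension (my own sketch, assuming NumPy/SciPy are available; $\phi$ below is the standard bump function supported in $(-1,1)$):

```python
import numpy as np
from scipy.integrate import quad

# Standard smooth bump function supported in (-1, 1), and its derivative.
def phi(x):
    return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

def dphi(x):
    return phi(x) * (-2.0 * x) / (1.0 - x**2) ** 2 if abs(x) < 1 else 0.0

# A C^1 function f with classical derivative f'.
f, df = np.sin, np.cos

# d/dx (test f)(phi) = -int f phi' dx  should equal  (test f')(phi) = int f' phi dx.
lhs = -quad(lambda x: f(x) * dphi(x), -1, 1)[0]
rhs = quad(lambda x: df(x) * phi(x), -1, 1)[0]
print(abs(lhs - rhs))  # negligible up to quadrature error
```

The two quadratures agree up to numerical error, as the integration-by-parts argument predicts for smooth $f$.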

I wonder what properties could be placed on the function (distribution) $f$ to ensure that

\begin{align*} \partial_j(\mathrm{test}\ f)(\phi) = \mathrm{test}\ \partial_jf(\phi), \end{align*} where $\partial_jf$ is the almost-everywhere derivative. Clearly it is not enough to assume, for instance, that $f\in C^1(\mathbb{R}^n\setminus\{0\})$: the Heaviside function $\mathbb{1}_{[0,\infty)}$ is a counterexample, since its distributional derivative is the Dirac delta rather than its almost-everywhere derivative $0$. Considering the case $n=1$, I figured that if $f\in C^1(\mathbb{R}\setminus\{0\})$ with $f,f'\in L^1_{loc}$ and

$$f(\epsilon)\phi(\epsilon)-f(-\epsilon)\phi(-\epsilon)\rightarrow 0 \quad\text{as }\epsilon\downarrow 0,$$

then $\frac{d}{dx}\mathrm{test}\ f(\phi) = \mathrm{test}\ \frac{d}{dx}f(\phi)$ holds. Indeed, then

\begin{align*} \frac{d}{dx}\mathrm{test}\ f(\phi) & = -\int_{\mathbb{R}}f(x)\frac{d}{dx}\phi(x)\,dx = \lim_{\epsilon\downarrow 0}-\int_{\mathbb{R}\setminus[-\epsilon,\epsilon]}f(x)\frac{d}{dx}\phi(x)\,dx\\ & = \lim_{\epsilon\downarrow 0}\left(f(\epsilon)\phi(\epsilon)-f(-\epsilon)\phi(-\epsilon)+\int_{\mathbb{R}\setminus[-\epsilon,\epsilon]}\frac{d}{dx}f(x)\,\phi(x)\,dx \right)\\ & = \int_{\mathbb{R}}\frac{d}{dx}f(x)\,\phi(x)\,dx = \mathrm{test}\ \frac{d}{dx}f(\phi), \end{align*} where the boundary terms vanish by assumption and $f'\in L^1_{loc}$ justifies passing to the limit in the last integral. However, for higher dimensions this condition becomes too complicated. Is there some simple condition which could be placed on $f$ to ensure the above in $\mathbb{R}^n$?
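To make the $n=1$ criterion concrete, here is a numerical sketch (my own illustration, assuming NumPy/SciPy): for $f(x)=|x|$ the boundary term $\epsilon(\phi(\epsilon)-\phi(-\epsilon))$ vanishes and the almost-everywhere derivative $\operatorname{sign}(x)$ is indeed the distributional derivative, while for the Heaviside function the boundary term tends to $\phi(0)\neq 0$ and the distributional derivative is $\delta$, not the almost-everywhere derivative $0$.

```python
import numpy as np
from scipy.integrate import quad

# Smooth bump supported in (-0.8, 1.2), shifted so it is nonzero at x = 0.
def phi(x):
    t = x - 0.2
    return np.exp(-1.0 / (1.0 - t**2)) if abs(t) < 1 else 0.0

def dphi(x):
    t = x - 0.2
    return phi(x) * (-2.0 * t) / (1.0 - t**2) ** 2 if abs(t) < 1 else 0.0

# f(x) = |x|: C^1 away from 0, f and f' = sign(x) locally integrable, and
# f(eps)phi(eps) - f(-eps)phi(-eps) = eps*(phi(eps) - phi(-eps)) -> 0.
lhs = -quad(lambda x: abs(x) * dphi(x), -0.8, 1.2, points=[0.0])[0]
rhs = quad(lambda x: np.sign(x) * phi(x), -0.8, 1.2, points=[0.0])[0]
print(abs(lhs - rhs))  # negligible: a.e. derivative = distributional derivative

# Heaviside: the boundary term tends to phi(0) != 0, and correspondingly
# -int H phi' dx = phi(0), the action of the Dirac delta on phi.
H = lambda x: 1.0 if x >= 0 else 0.0
lhs_H = -quad(lambda x: H(x) * dphi(x), -0.8, 1.2, points=[0.0])[0]
print(abs(lhs_H - phi(0.0)))  # negligible: the derivative is delta, not 0
```

The `points=[0.0]` argument tells the quadrature routine about the kink/jump at the origin so each smooth piece is integrated separately.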