All the derivatives of distributions are also distributions, but what about the converse?


Say you have a linear functional $f$ well defined on $\mathscr{D}(\mathbb{R})$, and suppose that for every test function $\phi$ you have $$ -f(\phi') = g(\phi). $$

If that $g$ defines a distribution, can you say that $f$ is also a distribution?

It would help me solve so many problems much faster.

Intuitively I'd say yes: if $g$ is a distribution, then $f(\phi'')$ also behaves like a distribution, so if every test function could be written as the second derivative of a test function, then $f$ would be a distribution. I'm not sure of that last step, which is the main reason I came here to post this question.

Comments, clarifications and counterexamples will all be greatly appreciated. Thanks!

Best Answer

The answer to the question you are asking is yes: it is a standard property of distributions of one variable.

Precisely, the following theorem is a standard fact of the general theory of distributions:
Theorem ([1], §1.5.3. pp. 26-27) Equation \eqref{1} $$ f^\prime=g\label{1}\tag{1} $$ has a solution $f\in\mathscr{D}^\prime$ for all distributions $g\in\mathscr{D}^\prime$.
Sketch of proof. We have that \eqref{1} is equivalent to the following equation $$ (f,-\varphi^\prime)=(f^\prime,\varphi)=(g,\varphi)=\Bigg( g,\int\limits_{-\infty}^x \varphi^\prime(y)\,\mathrm{d}y \Bigg)\quad \forall\varphi\in\mathscr{D}(\Bbb R). $$ Thus the functional $f$ is already defined on every test function $\varphi$ which is the derivative of some other test function: now define $$ \varphi_o(x)=\varphi(x)-\omega(x)\int\limits_{-\infty}^{+\infty}\varphi(y)\,\mathrm{d}y \quad \forall \varphi\in\mathscr{D}(\Bbb R),\label{p}\tag{P} $$ where $\omega(x)$ is an arbitrary test function such that $$ \int\limits_{-\infty}^{+\infty}\omega(x)\,\mathrm{d}x=1. $$ It can be proved that $\varphi_o$ is a test function which is the derivative of another test function (indeed $\int_{-\infty}^{+\infty}\varphi_o(x)\,\mathrm{d}x=0$ by construction, so its primitive $\int_{-\infty}^{x}\varphi_o(y)\,\mathrm{d}y$ is itself a test function), and that if we choose a sequence $\{\varphi_\nu\}_{\nu\in\Bbb N}$ converging to $0$ in $\mathscr{D}(\Bbb R)$, then the sequence $\{\varphi_{o\nu}\}_{\nu\in\Bbb N}$ associated to it by the projection \eqref{p} converges to $0$ in $\mathscr{D}(\Bbb R)$.
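The projection \eqref{p} is easy to experiment with numerically. Below is a minimal sketch, not part of the proof: the grid, the particular bump $\omega$ and the sample $\varphi$ are my own illustrative choices. It checks that $\varphi_o$ has zero integral, so that its primitive vanishes for large $x$ and is again a test function.

```python
import numpy as np

# Minimal numerical sketch of the projection (P); the grid, the bump
# omega and the sample test function phi are illustrative choices.

def bump(x):
    """Smooth compactly supported bump, nonzero on (-1, 1)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 40001)
dx = x[1] - x[0]

omega = bump(x)
omega /= omega.sum() * dx                  # normalize: integral of omega = 1

phi = bump(x + 0.3) * (1.0 + x)            # an arbitrary test function

# Projection (P): phi_o = phi - omega * (integral of phi)
phi_o = phi - omega * (phi.sum() * dx)

print(abs(phi_o.sum() * dx))               # integral of phi_o: ~ 0

# Hence the primitive of phi_o vanishes for large x: it is itself
# a test function, and phi_o is its derivative.
primitive = np.cumsum(phi_o) * dx
print(abs(primitive[-1]))                  # ~ 0
```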
Now, defining the antiderivative functional $g^{(-1)}$ as $$ \big(\,g^{(-1)},\varphi\big)=-\Bigg( g,\int\limits_{-\infty}^x \varphi_o(y)\,\mathrm{d}y \Bigg)\quad \forall\varphi\in\mathscr{D}(\Bbb R),\label{a}\tag{A} $$ the linearity of such functional is an immediate consequence of the linearity of the integral and of the distribution $g$. The continuity follows from the continuity of the map $$ \varphi(x)\mapsto\int\limits_{-\infty}^x \varphi_o(y)\,\mathrm{d}y. $$ Precisely, as said above, for any sequence of test functions $\{\varphi_\nu\}_{\nu\in\Bbb N}$ converging to $0$ in $\mathscr{D}(\Bbb R)$, the sequence $\{\varphi_{o\nu}\}_{\nu\in\Bbb N}$ converges to $0$ in $\mathscr{D}(\Bbb R)$, therefore $$ \begin{split} \varphi_{o\nu}\underset{\nu\to\infty}{\longrightarrow} 0 \; &\implies \int\limits_{-\infty}^x \varphi_{o\nu}(y)\,\mathrm{d}y\underset{\nu\to\infty}{\longrightarrow} 0 \\ \frac{\mathrm{d}^k\varphi_{o\nu}}{\mathrm{d}x^k}\underset{\nu\to\infty}{\longrightarrow} 0 \; &\implies \frac{\mathrm{d}^k}{\mathrm{d}x^k}\int\limits_{-\infty}^x \varphi_{o\nu}(y)\,\mathrm{d}y=\frac{\mathrm{d}^{k-1}\varphi_{o\nu}}{\mathrm{d}x^{k-1}}\underset{\nu\to\infty}{\longrightarrow} 0\quad k\in\Bbb N \end{split}\text{in }\mathscr{D}(\Bbb R) $$ thus $\Big\{\int\limits_{-\infty}^x \varphi_{o\nu}(y)\,\mathrm{d}y\Big\}_{\nu\in\Bbb N}$ is a sequence of functions in $\mathscr{D}(\Bbb R)$ converging to $0$ in $\mathscr{D}(\Bbb R)$.
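As a sanity check of definition \eqref{a}, one can discretize it for the concrete choice $g=\delta$, for which $(g,\psi)=\psi(0)$, and verify that $-\big(g^{(-1)},\varphi'\big)\approx\varphi(0)$, i.e. that $\big(g^{(-1)}\big)'=\delta$ numerically. The sketch below is self-contained; the grid, the bump $\omega$ and the sample $\varphi$ are illustrative choices of mine, not prescribed by the proof.

```python
import numpy as np

# Sanity check of definition (A) for g = delta, i.e. (g, psi) = psi(0).
# Grid, bump omega and sample phi are illustrative choices.

def bump(x):
    """Smooth compactly supported bump, nonzero on (-1, 1)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-3.0, 3.0, 60001)
dx = x[1] - x[0]
i0 = np.argmin(np.abs(x))                  # grid index of the point 0

omega = bump(x)
omega /= omega.sum() * dx                  # normalize: integral of omega = 1

def antiderivative_of_delta(phi):
    """(g^{(-1)}, phi) = -(delta, primitive of phi_o), per (A)."""
    phi_o = phi - omega * (phi.sum() * dx)     # projection (P)
    primitive = np.cumsum(phi_o) * dx
    return -primitive[i0]                      # -(delta, Phi) = -Phi(0)

# Verify (g^{(-1)})' = delta: -(g^{(-1)}, phi') should equal phi(0).
phi = bump(x - 0.2)
lhs = -antiderivative_of_delta(np.gradient(phi, dx))
rhs = phi[i0]                              # phi(0)
print(lhs, rhs)                            # the two numbers agree
```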
This implies that the antiderivative \eqref{a} of a distribution is a distribution: then $$ f=g^{(-1)} +C \label{2}\tag{2} $$ where $C$ is an arbitrary constant, is the sought-for primitive of $g$. $\blacksquare$
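The constant $C$ in \eqref{2} can also be seen numerically: different choices of the normalizing bump $\omega$ in \eqref{p}-\eqref{a} yield antiderivatives differing exactly by a constant distribution. The sketch below, again with $g=\delta$ and illustrative grids and bumps of my own choosing, computes the ratio $\big(g^{(-1)}_{\omega_1}-g^{(-1)}_{\omega_2},\varphi\big)/\int\varphi$ for two unrelated test functions and finds the same constant.

```python
import numpy as np

# Illustration of (2): two choices of omega in (P)/(A) yield
# antiderivatives of g = delta differing by a constant distribution.
# Grid, bumps and test functions are illustrative choices.

def bump(x, center=0.0, width=1.0):
    """Smooth bump, nonzero on (center - width, center + width)."""
    u = (x - center) / width
    out = np.zeros_like(x)
    inside = np.abs(u) < 1
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

x = np.linspace(-3.0, 3.0, 60001)
dx = x[1] - x[0]
i0 = np.argmin(np.abs(x))                  # grid index of the point 0

def antiderivative(phi, omega):
    """(g^{(-1)}, phi) for g = delta, per (A), with normalizing bump omega."""
    omega = omega / (omega.sum() * dx)         # enforce integral = 1
    phi_o = phi - omega * (phi.sum() * dx)     # projection (P)
    return -(np.cumsum(phi_o) * dx)[i0]        # -Phi(0)

omega1 = bump(x)                           # bump centered at 0
omega2 = bump(x, center=1.5, width=0.5)    # bump supported in (1, 2)

Cs = []
for phi in [bump(x, 0.2, 0.7), bump(x, -0.4, 1.2) * (1 + x ** 2)]:
    diff = antiderivative(phi, omega1) - antiderivative(phi, omega2)
    Cs.append(diff / (phi.sum() * dx))     # candidate constant C
print(Cs)                                  # same value for both phi
```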

Notes

  • The development offered above is only a sketch of the standard proof given by Shilov, since the rigorous proof justifying all steps, particularly the properties of $\varphi_o(x)$, while entirely elementary, is nevertheless not too short.
  • Vladimirov ([2], §2.2 pp. 27-29) offers a similar proof of the solvability of \eqref{1} in $\mathscr{D}^\prime$: he works in $\mathscr{D}^\prime\big(]a,b[\big)$ for arbitrary $a,b\in\overline{\Bbb R}$ with $a<b$ (i.e. including $a=-\infty$ and $b=+\infty$).
  • Erik Talvila used the solution of \eqref{1} to define a generalized integral which is more general than the ones of Henstock and Kurzweil. More recently, Ricardo Estrada and Jasson Vindas proposed a definition of a distributional integral based on a similar concept.

[1] G. E. Shilov (1968), Generalized functions and partial differential equations, Mathematics and Its Applications, Vol. 7, (English) New York-London-Paris: Gordon and Breach Science Publishers, XII+345, MR0230129, Zbl 0177.36302.

[2] V. S. Vladimirov (2002), Methods of the theory of generalized functions, Analytical Methods and Special Functions, Vol. 6, London–New York: Taylor & Francis, pp. XII+353, ISBN 0-415-27356-0, MR2012831, Zbl 1078.46029.