On the derivative of Dirac delta


Let $\delta(x)$ be the Dirac delta, i.e. the "strange object" characterized by the two properties $\delta(x)=0$ for all $x\neq 0$ and $\int \delta(x) f(x)\text{ d}x= f(0)$ (here $\int$ denotes the integral over $\mathbb{R}$). Now, let's define the derivative of $\delta(x)$ as

\begin{equation*} \dot{\delta}(x)\triangleq \lim_{\tau \to 0} \frac{\delta(x+\tau)-\delta(x)}{\tau} \end{equation*} For simplicity, I'm considering the univariate case $x\in\mathbb{R}$. Now let's see what happens when we compute a weighted integral of $f(x)$ involving $\dot{\delta}(x)$. Naively, I would write \begin{equation*} \begin{aligned} \int \dot{\delta}(x) f(x)\text{ d}x&= \int \left(\lim_{\tau \to 0} \frac{\delta(x+\tau)-\delta(x)}{\tau}\right)f(x)\text{ d}x\\ &= \lim_{\tau \to 0} \frac{1}{\tau}\left[\int \delta(x+\tau)\,f(x)\text{ d}x - \int \delta(x)\,f(x)\text{ d}x\right]\\ &= \lim_{\tau \to 0} \frac{1}{\tau}\left[f(0-\tau) - f(0)\right]=-\dot{f}(0)\\ \end{aligned} \end{equation*} provided that: we can push the limit outside the integral (if I'm not mistaken, Lebesgue's dominated convergence theorem tells us when this is allowed); the difference between the two integrals is not of the indeterminate form $\pm \infty \mp\infty$; and the derivative of $f$ exists at $x=0$.
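The heuristic computation above can be sanity-checked numerically by replacing $\delta$ with a narrow Gaussian mollifier. This is only a sketch; the width `eps`, the shift `tau`, and the test function $f(x)=\sin(x)$ are my own illustrative choices, not part of the argument:

```python
import numpy as np

# Approximate delta(x) by a narrow Gaussian; as eps -> 0 it concentrates
# at the origin while keeping total integral 1.
eps = 1e-3     # width of the Gaussian standing in for delta
tau = 1e-5     # finite shift in the difference quotient (tau << eps)

def delta(x):
    """Gaussian mollifier: integrates to 1, concentrates at 0 as eps -> 0."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-0.02, 0.02, 400_001)   # covers +-20 standard deviations
dx = x[1] - x[0]
f = np.sin                              # test function with f'(0) = 1

# Riemann sum of the difference quotient tested against f
quotient = (delta(x + tau) - delta(x)) / tau
integral = np.sum(quotient * f(x)) * dx
# integral should come out close to -f'(0) = -1
```

For this choice of $f$, the computed value agrees with $-\dot{f}(0)=-1$ to within the discretization error.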

If I'm not missing anything else, we can say that, just like $\int \delta(x) f(x) \text{ d}x=f(0)$, we have $\int \dot{\delta}(x) f(x)\text{ d}x=-\dot{f}(0)$.

Question

Since $\delta(x)$ is a "strange object", I'm not 100% sure about my conclusions. So, my question is: am I right? If not, where am I wrong?

Answer

Your argument shows that the derivative of the delta distribution, let's denote it by $\delta',$ is given by a limit of difference quotients, i.e. your $\dot{\delta}.$

To make this rigorous, we need three facts about distributions:

  1. For a distribution $\phi,$ its derivative $\phi'$ is defined by the relation $$\int \phi' f:=-\int \phi f'$$ for any test function $f.$ The motivation behind this definition is that when $\phi$ is a (continuously differentiable) function, this identity is just integration by parts.
  2. The limit $\lim_{n\to \infty} \phi_n$ of a sequence of distributions $\phi_n$ is characterized by $$\int(\lim_{n\to \infty} \phi_n) f=\lim_{n\to \infty}\int \phi_n f$$ for any test function $f.$ This can either be taken as a definition of convergence for distributions or proven as a theorem.
  3. For a distribution $\phi$ and $\tau\in \mathbb{R}$ there is its shift $\phi(\cdot+\tau),$ which is again a distribution given by the identity $$\int \phi(\cdot+\tau) f=\int \phi f(\cdot-\tau)$$ for any test function $f.$ Similarly to 1., for functions this is just a change of variables.

Using these facts, your proof shows $$\delta'=\lim_{\tau\to 0} \frac{\delta(\cdot+\tau)-\delta}{\tau},$$ an equality of two distributions, which is exactly the result we would expect from the corresponding equality for functions.
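Fact 2 can also be seen concretely: testing a sequence of Gaussians of shrinking width against a fixed test function recovers $f(0)$ in the limit, which is exactly the sense in which such a sequence converges to $\delta$. The widths $1/n$ and the test function below are illustrative choices, not from the answer:

```python
import numpy as np

def mollifier(x, n):
    """Gaussian of width 1/n; integrates to 1 and concentrates at 0 as n grows."""
    return n * np.exp(-(n * x)**2 / 2) / np.sqrt(2 * np.pi)

x = np.linspace(-5.0, 5.0, 1_000_001)
dx = x[1] - x[0]
f = lambda t: np.cos(t) + t          # smooth test function with f(0) = 1

# int mollifier(.,n) * f should approach f(0) = 1 as n increases
values = [np.sum(mollifier(x, n) * f(x)) * dx for n in (1, 10, 100)]
```

The successive values approach $f(0)=1$ monotonically here, illustrating distributional convergence as convergence of the tested integrals.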