Dirac delta property: $f(x)\delta(x-x_0) = f(x_0)\delta(x-x_0)$


Suppose you want to prove that $$ f(x)\delta(x-x_0) = f(x_0)\delta(x-x_0) $$

In my homework, I was instructed to show that integrating both sides of the equation leads to the fact that the above statement is true.

My answer: why? Since when are two functions with the same integral necessarily equal? That almost never happens. What is different about the Dirac distribution?

There are 4 answers below.

Best answer:

The expression "$ f(x)\delta(x-x_0) = f(x_0)\delta(x-x_0) $" is a shorthand.

It doesn't mean (in any ordinary sense) that

If $g$ and $h$ are the functions $g(x) = f(x)\delta(x-x_0)$ and $h(x) = f(x_0)\delta(x-x_0)$, then $g = h$ in the sense that $g(x) = h(x)$ for all $x$.

To start, neither $g$ nor $h$ is a function $\mathbb R \to \mathbb R$, because the delta function is not such a function in the first place.

Instead, we can think of the delta function as a map from a space of functions over the reals (say, the Lebesgue-integrable functions) to the reals, i.e., a function $L^1(\mathbb R) \to \mathbb R$. Write $\Delta_{x_0}$ for that function and define it as

$$\Delta_{x_0}(f) = f(x_0) \quad - (1)$$

Equivalently $\Delta_{x_0}(f(x)) = f(x_0)$. That is usually written as

$$\int_{-\infty}^{\infty} f(x) \delta(x - x_0) \ dx = f(x_0) \quad - (2)$$

This notation arises naturally from the idea that the $\delta$ function is the limit of a sequence of probability density functions that, in the limit, put "infinite" weight on $x_0$ and zero weight everywhere else.
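This limiting picture can be checked numerically. The sketch below (assuming NumPy is available; the function names are my own) uses a narrow Gaussian $\delta_\varepsilon(x) = e^{-x^2/2\varepsilon^2}/(\varepsilon\sqrt{2\pi})$ as a "nascent" delta and shows that $\int f(x)\,\delta_\varepsilon(x - x_0)\,dx \to f(x_0)$ as $\varepsilon \to 0$.

```python
import numpy as np

def nascent_delta(x, eps):
    """Gaussian approximation to delta(x): a pdf with mean 0 and std eps."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def sift(f, x0, eps):
    """Riemann-sum approximation of the integral of f(x) * delta_eps(x - x0)."""
    xs = np.linspace(x0 - 10, x0 + 10, 200001)
    dx = xs[1] - xs[0]
    return np.sum(f(xs) * nascent_delta(xs - x0, eps)) * dx

x0 = 1.0
for eps in (0.5, 0.1, 0.01):
    # As eps -> 0 the integral approaches f(x0) = cos(1) ~ 0.5403
    print(eps, sift(np.cos, x0, eps))
```

For a Gaussian of standard deviation $\varepsilon$ the integral equals $\cos(x_0)\,e^{-\varepsilon^2/2}$ exactly, so the printed values converge to $\cos(1)$ at rate $O(\varepsilon^2)$.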

The integral notation can be refined further by restricting integration to an interval $I \subset \mathbb R$, as mentioned in other answers, in which case

$$\int_I f(x)\delta(x - x_0) \ dx = \begin{cases} f(x_0), & x_0 \in I \\ 0, & x_0 \not\in I \end{cases}$$

Thus "$ f(x)\delta(x-x_0) = f(x_0)\delta(x-x_0) $" can be construed to mean

$$\Delta_{x_0}(f(x)) = \Delta_{x_0}(f(x_0)) \quad - (3)$$ where $f(x_0)$ is a constant function of value $f(x_0)$ everywhere.

Or, construing the expression in integral notation,

$$\int_{-\infty}^{\infty} f(x)\delta(x-x_0) \ dx = \int_{-\infty}^{\infty} f(x_0)\delta(x-x_0) \ dx \quad - (4)$$

To demonstrate statement (3) using the definition (1), we have $$\Delta_{x_0}(f(x)) = f(x_0) \\ \text{ and } \quad \Delta_{x_0}(f(x_0)) = f(x_0) \ \text{, as } \Delta_{x_0}(c) = c \text{ for any constant function } c$$

That is $\Delta_{x_0}(f(x)) = \Delta_{x_0}(f(x_0))$.
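Definition (1) and statement (3) can be mirrored directly in code. In this sketch (the names `Delta`, `lhs`, `rhs` are my own, not standard), `Delta(x0)` returns the evaluation functional $\Delta_{x_0}$; applying it to $f$ and to the constant function $x \mapsto f(x_0)$ gives the same number.

```python
import math

def Delta(x0):
    """Return the evaluation functional Delta_{x0}: f -> f(x0), per (1)."""
    return lambda f: f(x0)

f = math.cos
x0 = 1.0
D = Delta(x0)

lhs = D(f)                    # Delta_{x0}(f(x))   = f(x0)
rhs = D(lambda x: f(x0))      # Delta_{x0}(f(x0)): f(x0) as a constant function
assert lhs == rhs == f(x0)    # statement (3)
```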

For the record, almost no one uses the $\Delta$ notation or its equivalent in practice. The integral notation of equation (2) is the standard. In those terms

$$\int_{-\infty}^{\infty} f(x) \delta(x - x_0) \ dx = f(x_0) \quad\text{ and } \int_{-\infty}^{\infty} f(x_0) \delta(x - x_0) \ dx = f(x_0)$$

For any interval $I \subset \mathbb R$ which contains $x_0$,

$$\int_I f(x) \delta(x - x_0) \ dx = f(x_0) = \int_I f(x_0) \delta(x - x_0) \ dx$$

and if $x_0 \not\in I$, then

$$\int_I f(x) \delta(x - x_0) \ dx = 0 = \int_I f(x_0) \delta(x - x_0) \ dx$$
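Both interval cases can be sanity-checked with the same nascent-delta idea (a sketch assuming NumPy; `delta_eps` and `integral` are my own names). With $x_0$ inside $I$ both sides come out near $f(x_0)$; with $x_0$ outside, both are essentially $0$.

```python
import numpy as np

def delta_eps(x, eps=1e-2):
    """Narrow Gaussian standing in for delta(x)."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def integral(f, a, b, x0, n=200001):
    """Riemann-sum approximation of the integral over [a, b] of f(x) delta(x - x0) dx."""
    xs = np.linspace(a, b, n)
    dx = xs[1] - xs[0]
    return np.sum(f(xs) * delta_eps(xs - x0)) * dx

f, x0 = np.cos, 1.0
const = lambda x: np.full_like(x, f(x0))   # the constant function f(x0)
# x0 inside I = [0, 2]: both sides are close to f(x0) = cos(1)
print(integral(f, 0, 2, x0), integral(const, 0, 2, x0))
# x0 outside I = [3, 5]: both sides are essentially 0
print(integral(f, 3, 5, x0), integral(const, 3, 5, x0))
```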

Another answer:

You can show that $$\int_a^b f(x)\delta(x-x_0)\, dx = \int_a^b f(x_0)\delta(x-x_0)\, dx$$

for all real $a$ and $b$.

If $a < x_0 < b$, you get $f(x_0)$ on each side; otherwise both sides are $0$.

Another answer:

The intuitive way to understand this is through the informal definition of the Dirac delta: it is a "function" (really a generalized function, or distribution) that is $0$ everywhere except at $0$, where it takes a sort of "infinity". The specifics of that infinity are such that $$\int _a ^b \delta (x)\,dx=1$$ if and only if $0\in [a,b]$.

That is, it is a "unit impulse" at $0$, something that makes no sense for ordinary functions.

So, when we do $f(x)\cdot \delta (x-x_0)$, what we are really doing is isolating the single value at $x_0$, thus giving $f(x_0)$.
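A discrete analogue (my own illustration, not part of the answer above) makes the "unit impulse" picture concrete: multiplying a sampled signal by a Kronecker delta at index $k$ zeroes every sample except the one at $k$, so summing recovers the single value $f[k]$.

```python
# Discrete analogue: a Kronecker delta picks out one sample of a signal.
def kron_delta(n, k):
    """1 at n == k, 0 elsewhere -- the discrete 'unit impulse'."""
    return 1 if n == k else 0

signal = [x * x for x in range(10)]   # f[n] = n^2
k = 4
picked = [signal[n] * kron_delta(n, k) for n in range(10)]
print(picked)        # zero everywhere except index 4
print(sum(picked))   # the discrete "integral" is f[k] = 16
```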

For more information, you'd have to go into functional analysis: we define $\delta$ to be the functional that takes a function $f$ that is $C^\infty$ with compact support to its value at $0$, i.e., $\delta (f(x))=f(0)$.

Another answer:

I am going to gloss over the subtleties of the Dirac delta, which isn't quite a function, and get straight to the essential problem:

Theorem: if $\int_a^b f(x)\,dx = \int_a^b g(x)\,dx$ for all $a$ and $b$, then $f$ and $g$ are equal (almost everywhere).

This is what I want to convince you is generally true. In order to do so, I'm going to begin with the following

Lemma: if $\int_a^b f(x)\,dx = 0$ for all $a$ and $b$, then $f=0$ (almost everywhere).

Why is this true? A rigorous proof would require a better look at the notion of integration and exactly which functions we consider "integrable", but the essence is this: if $f$ is non-zero (by which I really mean essentially non-zero) at and around some point $c$, we can find $a$ and $b$ with $a<c<b$ such that $$ \int_a^b f(x)\,dx \neq 0. $$ Since this never happens for the $f$ in the lemma, $f = 0$.

From there, we simply note that $$ \int_a^b f(x)\,dx = \int_a^b g(x)\,dx \iff \\ \int_a^b f(x)\,dx - \int_a^b g(x)\,dx = 0 \iff\\ \int_a^b [f(x) - g(x)]\,dx = 0 $$ so, if $\int_a^b f(x)\,dx = \int_a^b g(x)\,dx$ for all $a,b$, then $f - g = 0$, which is to say that $f = g$.
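The last step can be illustrated numerically (a sketch assuming NumPy; an illustration, not a proof). By the fundamental theorem of calculus, differentiating $F(b) = \int_a^b f(x)\,dx$ with respect to $b$ recovers $f(b)$, so if the integrals of $f$ and $g$ agree for every $b$, differentiating forces $f = g$.

```python
import numpy as np

def cumulative_integral(f, xs):
    """F(b) = integral of f from xs[0] to b, via cumulative Riemann sums."""
    dx = xs[1] - xs[0]
    return np.cumsum(f(xs)) * dx

xs = np.linspace(0.0, 3.0, 100001)
F = cumulative_integral(np.cos, xs)

# Fundamental theorem of calculus: dF/db recovers the integrand, so two
# functions with equal integrals over every [a, b] must coincide.
dF = np.gradient(F, xs)
print(np.max(np.abs(dF[1:-1] - np.cos(xs[1:-1]))))  # small discretization error
```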