"Least Squares" of Dirac Delta?

It is well known that the first $N$ terms of the Fourier series of an even function $f$ correspond to the least-squares approximation of $f$ on $[-\pi,\pi]$ using the functions $S = \{1,\cos(x), \cos(2x),\dots,\cos(Nx)\}$.
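This equivalence is easy to check numerically. The sketch below (assuming NumPy; the target $f(x)=x^2$ and the grid size are arbitrary choices) fits the cosine basis by least squares on a uniform grid and compares the result with the analytic Fourier coefficients of $x^2$, namely $\pi^2/3$ and $4(-1)^n/n^2$:

```python
import numpy as np

# Midpoint grid on [-pi, pi]; the cosines are orthogonal on it up to
# aliasing, so discrete least squares mimics the continuous L^2 projection.
M, N = 4096, 8
x = -np.pi + (np.arange(M) + 0.5) * (2 * np.pi / M)

A = np.stack([np.cos(n * x) for n in range(N + 1)], axis=1)  # design matrix
f = x**2                                                     # even target function

coef, *_ = np.linalg.lstsq(A, f, rcond=None)

# Analytic Fourier coefficients of x^2: a_0 = pi^2/3, a_n = 4(-1)^n/n^2.
exact = np.array([np.pi**2 / 3] + [4 * (-1) ** n / n**2 for n in range(1, N + 1)])
print(np.max(np.abs(coef - exact)))  # close to 0
```

The least-squares coefficients match the Fourier coefficients to high accuracy, illustrating that truncating the Fourier series *is* the least-squares fit in this basis.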

The least squares method doesn't make sense for approximating the delta function, since

$$\int_{-\pi}^\pi(f(x)-\delta(x))^2\,dx$$

diverges. However, the Fourier series technique can approximate this distribution without issue. Moreover, one can populate the least-squares matrix with values when approximating the delta function. In what sense, then, is the Fourier series approximating the delta function? Is there a generalized "least-squares" that will quantify how close a function is to the delta function?

There are 5 best solutions below

---

Generally, in a Banach space $\|u\|=\sup_{\|l\|=1}|\langle l,u\rangle|$, which means that a least-squares-type solution realizes $\inf_u\sup_{\|l\|=1}|\langle l,u\rangle|$, where $l$ runs over the unit ball of the dual space. Requiring only that $|\langle l,u_n\rangle|$ go to $0$ for every dual $l$ (weak convergence of the errors $u_n$) makes sense for the Sobolev spaces to which $f(x)-\delta(x)$ belongs and for their duals; it even makes sense for the more general locally convex spaces to which distributions belong. If one wants a single measure of deviation, the Fréchet metric can be used in many cases (when the space has a countable family of defining seminorms).

$f(x)-\delta(x)$ belongs, for example, to $H^{-1}$ from the standard Hilbert scale $H^k$. It is straightforward to see that the Fourier series converges to it weakly, i.e. when the functionals $l$ are taken from $H^{1}$ (the series need not converge in the $H^{-1}$ norm in all cases). Alternatively, one can use weak convergence of measures, whose space is dual to $C^0$. Both convergences are metrizable (on bounded subsets); see e.g. Metrization of weak convergence of signed measures on MathOverflow.

---

You can think of the delta function and the function you're comparing it to as both being linear functionals on some fixed normed space, so they come equipped with a norm from that dual space. For example, although you cannot think of $\delta$ as a linear functional on $L^2$, it is a linear functional on the Sobolev space $H^1=W^{1,2}$, because Sobolev embedding implies that $H^1$ functions in one dimension are continuous functions. Thus you can measure the distance between any bounded linear functional on $H^1$ and $\delta$ through the norm on the dual of $H^1$ like this:

$$\| f - \delta \| := \sup_{\| g \|_{H^1}=1} |f(g)-\delta(g)|.$$
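Under this dual norm the cosine partial sums really do approach $\delta$, and the distance can be computed directly from Fourier coefficients. A sketch, assuming the (convention-dependent) normalization $\|u\|_{H^{-1}}^2 = 2\pi\sum_n |c_n|^2/(1+n^2)$ for $u=\sum_n c_n e^{inx}$, with $c_n = 1/(2\pi)$ for $\delta$; other normalizations change constants only:

```python
import numpy as np

def dual_err(N, cutoff=10**6):
    """Dual (H^{-1}-type) norm of delta minus its N-th symmetric partial sum.

    The error has coefficients c_n = 1/(2*pi) for |n| > N, so
    ||delta - S_N||^2 = 2*pi * sum_{|n|>N} (1/(2*pi))^2 / (1 + n^2)
                      = (1/pi) * sum_{n>N} 1 / (1 + n^2).
    """
    n = np.arange(N + 1, cutoff, dtype=float)
    return np.sqrt(np.sum(1.0 / (1.0 + n**2)) / np.pi)

for N in (1, 10, 100, 1000):
    print(N, dual_err(N))  # decreases roughly like 1/sqrt(pi*N)
```

The distance decays like $1/\sqrt{\pi N}$, so in the dual norm of $H^1$ the partial sums converge to $\delta$, even though they diverge in $L^2$.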

---

Suppose $$S_N(x) = \sum_{n=0}^N a_n \cos(nx) $$ and $g$ is an even function with period $2\pi.$

Suppose the coefficients $a_n,\, n=0,\ldots , N$ are so chosen that $$ \int_{-\pi}^\pi g(x)\cos(nx) \,dx = \frac 1 \pi \int_{-\pi}^\pi S_N(x) \cos(nx)\, dx \tag 1 $$ for $n=0,\ldots,N.$ If $(1)$ holds, then $S_N$ is the $N$th-degree Fourier series approximation to $g.$

If $g$ is the delta function, then the integral on the left in $(1)$ is $1.$ In the integral on the right every term integrates to $0$ except the $n$th term, which integrates to $a_n$ (for $1\le n\le N$; the constant $n=0$ term picks up the usual extra factor of $2$). Therefore $a_n=1$ if $S_N$ is the $N$th-degree Fourier series approximation of the delta function.
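The matching condition $(1)$ with $a_n = 1$ can be verified numerically for $n \ge 1$ (a sketch assuming NumPy; for $n=0$ the right-hand side yields $2a_0$, the usual wrinkle with the constant term):

```python
import numpy as np

M, N = 4096, 10
dx = 2 * np.pi / M
x = -np.pi + (np.arange(M) + 0.5) * dx           # midpoint grid on [-pi, pi]

S_N = sum(np.cos(n * x) for n in range(N + 1))   # partial sum with a_n = 1

# (1/pi) * integral of S_N(x) cos(n x) dx should equal 1 for n = 1..N,
# matching integral of delta(x) cos(n x) dx = cos(0) = 1.
for n in range(1, N + 1):
    rhs = np.sum(S_N * np.cos(n * x)) * dx / np.pi
    print(n, rhs)  # each value is 1 (up to floating-point error)
```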

There is a certain sense in which this minimizes the integral of the square of the discrepancy between the finite trigonometric series and $\delta.$ Suppose we let \begin{align} \delta(x) & = \sum_{n=0}^\infty \cos(nx) \\ S_N(x) & = \sum_{n=0}^N \cos(nx) \\ g(x) & = \sum_{n=0}^N b_n \cos(nx) \end{align} Then $$ g(x) - \delta(x) = \Big(g(x) - S_N(x)\Big) + \Big(S_N(x) - \delta(x)\Big) $$ and $$ \underbrace{(g(x) - \delta(x))^2} = \Big(g(x) - S_N(x)\Big)^2 + {} \underbrace{\Big(S_N(x) - \delta(x)\Big)^2} {} + \Big(\text{something whose integral is $0$}\Big) $$ The integrals of the two terms over the $\underbrace{\text{underbraces}}$ are infinite. But the first term on the right is finite and is made as small as possible, in fact is made equal to $0,$ by making $b_n=1$ for $n=0,\ldots,N.$ In other words, that portion of the integral of the square of the discrepancy that can be altered by altering the first $N$ coefficients is made as small as possible by making those coefficients equal to $1.$

(How, if possible, to make this logically rigorous is more than I will attempt to say at this point.)
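One can at least probe the claim numerically by truncating the divergent series: replace $\delta$ by a high partial sum $S_M$ with $M \gg N$ and do an ordinary least-squares fit with the first $N+1$ cosines. The fitted coefficients all come out equal to $1$ (a sketch assuming NumPy; $M$, $N$, and the grid size are arbitrary):

```python
import numpy as np

grid, N, M = 4096, 10, 200
x = -np.pi + (np.arange(grid) + 0.5) * (2 * np.pi / grid)  # midpoint grid

target = sum(np.cos(n * x) for n in range(M + 1))          # stand-in for delta
A = np.stack([np.cos(n * x) for n in range(N + 1)], axis=1)

b, *_ = np.linalg.lstsq(A, target, rcond=None)
print(b)  # all coefficients equal 1
```

By orthogonality, the answer is independent of the cutoff $M$, which is the discrete shadow of the argument above.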

---

In crude physical terms it is the Energy Principle expressed by Parseval's Theorem that tells you "how near/far" two signals are from each other. And that is quite intuitive.

If you take the frequency spectra of the two signals, then their difference is the spectrum of the difference between the signals (the "error"). The sum/integral of its square is proportional to the energy/power of the "error" signal.

The delta function has a flat spectrum of unit amplitude, and the set $\{\cos {(kx)}\quad |0 \le k \le N \}$ has a unit-amplitude spectrum truncated at $N$.
The conclusion is then immediate: the error spectrum is exactly the tail beyond $N$, which shrinks frequency by frequency as $N$ grows.
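The Parseval relation behind this argument is easy to check in the discrete setting (a sketch assuming NumPy; the two example signals are arbitrary):

```python
import numpy as np

M = 1024
x = np.linspace(0, 2 * np.pi, M, endpoint=False)
s1 = np.cos(3 * x) + 0.5 * np.cos(7 * x)   # two arbitrary test signals
s2 = np.cos(3 * x)

err = s1 - s2
time_energy = np.sum(err**2)
freq_energy = np.sum(np.abs(np.fft.fft(err))**2) / M   # discrete Parseval

print(time_energy, freq_energy)  # the two energies agree
```

The energy of the error computed sample by sample equals the energy computed from the spectrum of the difference, which is the principle invoked above.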

---

> In what sense, then, is the Fourier series approximating the delta function? Is there a generalized "least-squares" that will quantify how close a function is to the delta function?

Here are two possible answers:

  1. The Dirac Delta is not a function and you can't square it, so using the $L^2$-norm does not make sense. I think that the most natural space for the Dirac Delta is the space of (Radon) measures: the Dirac Delta acts on continuous functions and thus is an element of their dual space, and the dual of the continuous functions (vanishing on the boundary) is, by Riesz–Markov, the space of Radon measures. This gives a natural norm to work with, namely $$ \|f-\delta\| = \sup\Big\{\int \phi \,d(f-\delta)\ :\ \|\phi\|_\infty\leq 1\Big\} $$ where $\phi$ runs over continuous functions and $\int \phi \,d(f-\delta) = \int \phi f - \phi(0)$. This is the Radon norm on the space of measures. Actually, solving minimization problems involving the Radon norm is (at least numerically) not that hard, since you end up with a convex-concave saddle-point problem, in this case $$ \min_f\max_{\|\phi\|_\infty\leq 1}\int \phi f - \phi(0).$$ A generalization to fancier "norms" has been linked in Conifold's answer. The Wasserstein-1 norm (also called the dual Lipschitz norm) mentioned in the linked answers simply amounts to the problem $$ \min_f\max_{\|\phi\|_\infty\leq 1, \|\phi'\|_\infty\leq 1}\int \phi f - \phi(0),$$ i.e., you maximize over functions $\phi$ bounded by $1$ with Lipschitz constant at most $1$.

  2. If you insist on least squares, note that formally expanding the square gives $$ \|f-\delta\|_2^2 = \|f\|_2^2 - 2\langle f,\delta\rangle + \|\delta\|_2^2. $$ The last term does not make sense, but we want to minimize over $f$ anyway, so its value plays no role and we can drop it. The middle term makes sense as soon as $f$ is continuous, where it evaluates to $-2f(0)$; thus we can give meaning to the least-squares problem as $$ \min_f \|f\|_2^2 - 2f(0) $$ where, in your case, you minimize over linear combinations of cosines.
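The inner maximization in the Radon-norm formulation of item 1 can be probed on a grid: for fixed $f$, with only the constraint $\|\phi\|_\infty\le 1$, the optimal $\phi$ is the sign of the discrete weights of $f\,dx - \delta$, and the supremum comes out as $\|f\|_{L^1} + 1$, since the point mass is singular with respect to $f\,dx$. A discretized sketch, assuming NumPy; the choice $f(x)=\cos x + 1/2$ is arbitrary:

```python
import numpy as np

M = 4001
dx = 2 * np.pi / (M - 1)
x = (np.arange(M) - M // 2) * dx       # grid on [-pi, pi] with x[M//2] = 0 exactly
i0 = M // 2                            # index of the grid point at 0

f = np.cos(x) + 0.5
w = f * dx                             # discrete weights of the measure f(x) dx
w[i0] -= 1.0                           # subtract the unit point mass at 0

phi = np.sign(w)                       # optimal test function, |phi| <= 1
value = np.dot(phi, w)                 # = sum |w|, the discrete Radon norm

print(value, np.sum(np.abs(f)) * dx + 1.0)  # both close to ||f||_1 + 1
```

Note that the value never drops below $1$ for any integrable $f$, which is one reason the weaker Wasserstein-type "norms" above are attractive for minimization.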
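The renormalized problem in item 2 has a closed form over cosine polynomials: with $f=\sum_{n=0}^N b_n\cos(nx)$ the objective is $2\pi b_0^2 + \pi\sum_{n\ge1} b_n^2 - 2\sum_n b_n$, minimized at $b_0 = 1/(2\pi)$ and $b_n = 1/\pi$, which are exactly the Fourier coefficients of $\delta$ on $[-\pi,\pi]$. A numerical sketch (assuming NumPy), solving the normal equations $Gb = \mathbf{1}$ with the Gram matrix $G$ of the cosines:

```python
import numpy as np

M, N = 4096, 10
dx = 2 * np.pi / M
x = -np.pi + (np.arange(M) + 0.5) * dx           # midpoint grid on [-pi, pi]

C = np.stack([np.cos(n * x) for n in range(N + 1)], axis=1)
G = C.T @ C * dx                                 # Gram matrix of the cosines in L^2

# Minimizing b^T G b - 2 * sum(b) gives the normal equations G b = 1.
b = np.linalg.solve(G, np.ones(N + 1))

print(b[0], 1 / (2 * np.pi))   # constant coefficient: 1/(2*pi)
print(b[1], 1 / np.pi)         # higher coefficients: 1/pi
```

So least squares with the divergent constant dropped recovers precisely the truncated Fourier series of $\delta$, answering the original question in this renormalized sense.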