Understanding Proof of Poisson Summation Formula


Consider a proof from a textbook on Harmonic Analysis:

[Image of the textbook proof omitted.]

Note that $\mathcal{S}(ℝ)$ denotes the Schwartz Space.

Question 1: Why does the top left formula in the proof start out as:

$$ \int_0^1 \left( \sum_{m ∈ ℤ} \phi(x+m) \right) \mathrm{e}^{-2 \pi i n x} \,\mathrm{d}x? $$

Shouldn't the integral be taken over $(-\infty, \infty)$, i.e. $\int_{-\infty}^{\infty}$?

Question 2: What justifies the leap from

$$ \sum_{m ∈ ℤ} \int_0^1 \phi(x+m) \mathrm{e}^{-2 \pi i n x} \,\mathrm{d}x = \sum_{m ∈ ℤ} \int_m^{m+1} ϕ(y) \mathrm{e}^{-2 \pi i n y} \,\mathrm{d}y ? $$

There are 4 answers below.

Accepted answer:

Question 1: Define the function $f$ by $$ f(x) := \sum_{m\in\mathbb{Z}} \phi(x+m). $$ Observe that $$ f(x+1) = \sum_{m\in\mathbb{Z}} \phi((x+1)+m) = \sum_{m\in\mathbb{Z}} \phi( x + (m+1) ) = \sum_{m'\in\mathbb{Z}} \phi(x+m') = f(x), $$ where $m' = m+1$. Thus $f$ is a $1$-periodic function, and so we may compute its $n$-th Fourier coefficient by integrating over a single period, i.e. $$ \hat{f}(n) = \int_{0}^{1} f(x) \mathrm{e}^{-2\pi inx}\,\mathrm{d}x = \int_{0}^{1} \sum_{m\in\mathbb{Z}} \phi(x+m) \mathrm{e}^{-2\pi inx}\,\mathrm{d}x, $$ which answers your first question.
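As a numerical sanity check (a sketch, not part of the answer), one can verify that the $n$-th Fourier coefficient of the periodization equals $\hat\phi(n)$, using the Gaussian $\phi(x) = e^{-\pi x^2}$, which is its own Fourier transform under the convention $\hat\phi(\xi) = \int \phi(x)\,e^{-2\pi i\xi x}\,dx$. The function names below are mine, not from the textbook.

```python
import numpy as np

# phi(x) = exp(-pi x^2) is a Schwartz function equal to its own
# Fourier transform: phihat(xi) = exp(-pi xi^2).
phi = lambda x: np.exp(-np.pi * x**2)
phihat = lambda xi: np.exp(-np.pi * xi**2)

def fourier_coeff_of_periodization(n, M=20, N=20000):
    """n-th Fourier coefficient of f(x) = sum_{|m|<=M} phi(x+m) over [0, 1]."""
    x = np.linspace(0.0, 1.0, N, endpoint=False)   # uniform grid on one period
    f = sum(phi(x + m) for m in range(-M, M + 1))  # truncated periodization
    # For a smooth periodic integrand, this Riemann sum is spectrally accurate.
    return np.mean(f * np.exp(-2j * np.pi * n * x))

for n in range(3):
    print(n, abs(fourier_coeff_of_periodization(n) - phihat(n)))  # tiny errors
```

The truncation at $|m| \le 20$ is harmless here because the Gaussian decays so fast that the discarded tail is far below machine precision.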

Question 2: This is a fairly straightforward change of variables. Let $y = x+m$. Then (using the usual shorthand) $\mathrm{d}y = \mathrm{d}x$, which gives us \begin{align} \sum_{m\in\mathbb{Z}} \int_{0}^{1} \phi(x+m) \mathrm{e}^{-2\pi inx} \,\mathrm{d}x &= \sum_{m\in\mathbb{Z}} \int_{m}^{m+1} \phi(y) \mathrm{e}^{-2\pi in(y-m)} \,\mathrm{d}y && (\text{change of variables}) \\ &= \sum_{m\in\mathbb{Z}} \int_{m}^{m+1} \phi(y)\, \mathrm{e}^{-2\pi iny}\mathrm{e}^{2\pi inm}\, \mathrm{d}y \\ &= \sum_{m\in\mathbb{Z}} \int_{m}^{m+1} \phi(y) \mathrm{e}^{-2\pi iny}\, \mathrm{d}y. && (\text{since $\mathrm{e}^{2\pi ik} = 1$ for all $k\in\mathbb{Z}$}) \end{align} This answers your second question.
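One can also check the change-of-variables identity numerically for a particular $m$ and $n$ (a sketch with a Gaussian test function; the quadrature helper is my own, not from the answer):

```python
import numpy as np

phi = lambda x: np.exp(-np.pi * x**2)  # a concrete Schwartz function

def integral(a, b, g, N=200000):
    """Composite midpoint rule for a (complex-valued) integrand g on [a, b]."""
    h = (b - a) / N
    x = np.linspace(a, b, N, endpoint=False) + h / 2
    return np.sum(g(x)) * h

m, n = 1, 2
lhs = integral(0.0, 1.0, lambda x: phi(x + m) * np.exp(-2j * np.pi * n * x))
rhs = integral(m, m + 1.0, lambda y: phi(y) * np.exp(-2j * np.pi * n * y))
print(abs(lhs - rhs))  # agreement up to quadrature error
```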

Another answer:

Elias Stein (the famous analyst) tells a story about this theorem (folklore that I am going to mess up). He says that a math professor (he gives names to at least one of these people, but I don't remember them) asks his three students, "OK, given a Schwartz function, how can one get a periodic function?"

Student A says, "We can take the Fourier transform $\hat{\phi}$, apply it to integers getting $\hat{\phi}(n)$ and then form the trigonometric series $\sum_{n \in \mathbb{Z}}\hat{\phi}(n)e^{2\pi i n x}$." The Teacher says, "Good."

Student B says, "We can take the function $\phi$ and let the value at $x$ be the sum of the values of $\phi$ at all points on the real line an integer distance from $x$, getting $\sum_{n \in \mathbb{Z}}\phi(x+n)$." The Teacher again approves.

Student C (who is presumably Poisson), says, "These are both good... and they are equal!"

So the idea is this: we start with a Schwartz function $\phi$ on the real line and create a $1$-periodic function using each of these two constructions, and the theorem says that the two periodic functions are equal. So, when the proof says that it just needs to make sure that they have the same Fourier coefficients, it is using the fact that two continuous periodic functions with the same Fourier coefficients are equal. That is why, in your first question, the integration is only over the interval $[0,1]$.

They are both continuous because $\phi$, being a Schwartz function, decays faster than any polynomial, and so does its Fourier transform. In particular, there is a constant $C$ such that $|\hat{\phi}(n)| \leq \frac{C}{1+n^2}$ for all $n \in \mathbb{Z}$ and $|\phi(y)| \leq \frac{C}{1+y^2}$ for all $y\in\mathbb{R}$. These bounds mean that both series converge uniformly (by the Weierstrass $M$-test), so the periodic functions we made are continuous; hence their Fourier coefficients determine them, and we can interchange the integral and the summation that you were wondering about.
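To illustrate the punchline numerically (a sketch, not part of the answer): with the Gaussian $\phi(x) = e^{-\pi x^2}$, which equals its own Fourier transform, Student A's and Student B's periodic functions agree pointwise. The function names are mine.

```python
import numpy as np

phi = lambda x: np.exp(-np.pi * x**2)       # Schwartz function
phihat = lambda xi: np.exp(-np.pi * xi**2)  # its Fourier transform

def periodization(x, M=20):
    """Student B: sum of phi over integer translates (truncated at |n| <= M)."""
    return sum(phi(x + n) for n in range(-M, M + 1))

def fourier_side(x, M=20):
    """Student A: trigonometric series built from phihat(n) (truncated)."""
    return sum(phihat(n) * np.exp(2j * np.pi * n * x) for n in range(-M, M + 1))

x = 0.3
print(periodization(x), fourier_side(x).real)  # the two values coincide
```

Both truncations are safe because the Gaussian and its transform decay far below machine precision well before $|n| = 20$.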

Another answer:

There is a much easier approach than that.

Consider the Fourier series expansion of the sawtooth $x - \lfloor x\rfloor$, which is $1$-periodic; rearranged for the floor function it reads

$$ \lfloor x\rfloor = x - \frac{1}{2} + \frac{1}{\pi} \sum_{k=1}^\infty \frac{\sin(2 \pi k x)}{k}. $$

Substitute this into the Euler summation formula, which follows from integration by parts:

$$ \sum_{n=-\infty}^{\infty}f(n) = - \int_{-\infty}^{\infty}f'(x)\lfloor x\rfloor \,\mathrm{d}x. $$

Then you have the Poisson summation formula, very easily.
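The Euler summation formula itself can be checked numerically (a sketch, with a Gaussian test function chosen by me; the answer does not specify one):

```python
import numpy as np

# A rapidly decaying test function and its derivative.
f = lambda x: np.exp(-np.pi * x**2)
fprime = lambda x: -2 * np.pi * x * np.exp(-np.pi * x**2)

# Left side: sum of f over the integers (truncated; the tail is negligible).
lhs = sum(f(n) for n in range(-20, 21))

# Right side: -integral of f'(x) * floor(x) over [-20, 20], midpoint rule.
# Midpoints avoid the jump points of floor, which sit on panel boundaries.
N = 400000
h = 40.0 / N
x = -20.0 + h / 2 + h * np.arange(N)
rhs = -np.sum(fprime(x) * np.floor(x)) * h

print(lhs, rhs)  # the two sides agree up to quadrature error
```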

Another answer:

This was meant to be a comment in reply to a comment in Xander Henderson's nice answer, but it got too long.

The abelian group $A = \{x\}$ serving as the domain of the complex-valued functions $f(x)$ we are thinking about is always understood from the context of the problem; the Fourier transform $\mathcal F\colon f(x)\mapsto \hat f(\xi)$ then yields a complex-valued function $\hat f(\xi)$ on the "dual" group $\hat A = \{\xi\}$.

It is also very common for the abelian groups $A$ and $\hat A$ to carry a topological structure (because we want to talk about convergence), in other words a notion of "open sets," and usually a rather descript one at that (locally compact is often assumed, as well as Hausdorff). They may even carry a smooth structure, if we want to study the relationship between the operator $\mathcal F$ and differential operators, or to use that relationship as a collection of tools for something else we are working on.

When passing from a rapidly decaying function $f(x)$ on $\mathbb R$ to its periodization $f_{per}(x) = \sum_{n=-\infty}^\infty f(x-n)$ (say, as a tool to prove the Poisson summation formula), we consider $f_{per}(x)$ as a function on the circle $S^1$. Then we need to keep in mind that:

  • $\hat{\mathbb R} = \mathbb R$, hence $\hat f(\xi)$ is a function of $\xi\in\mathbb R$
  • $\hat{S^1} = \mathbb Z$, so $\hat{f_{per}}(\xi)$ is a function of $\xi\in \mathbb Z$. So we usually use the psychologically convenient variable $n$ instead of $\xi$ and write $\hat{f_{per}}(n)$.
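Concretely, the two dual pairs above give exactly the two transforms that appear in the proof:

$$ \hat f(\xi) = \int_{-\infty}^{\infty} f(x)\,\mathrm{e}^{-2\pi i \xi x}\,\mathrm{d}x \quad (A=\mathbb R,\ \hat A=\mathbb R), \qquad \hat{f_{per}}(n) = \int_{0}^{1} f_{per}(x)\,\mathrm{e}^{-2\pi i n x}\,\mathrm{d}x \quad (A=S^1,\ \hat A=\mathbb Z). $$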