A measure and the Fourier inverse transform of its Fourier transform


Given a finite Borel measure $\mu$ on $\mathbb{R}^d$, consider the Fourier inverse transform of its Fourier transform, $\mathcal{F}^{-1}(\mathcal{F}(\mu))$, where $$ \mathcal{F}(\mu)(\xi) = \int \exp(-2\pi i x \cdot \xi)\, d\mu(x) $$ and $$ \mathcal{F}^{-1}(g)(\xi) = \int \exp(2\pi i x \cdot \xi)\, g(x)\, dx. $$ How is $\mu$ related to $\mathcal{F}^{-1}(\mathcal{F}(\mu))$?

In particular, how are their supports related?

For context, I know that if instead of a measure $\mu$ we had a function $f$, we would have $$\mathcal{F}^{-1}(\mathcal{F}(f)) = f$$ (under suitable hypotheses, such as $f$ and $\mathcal{F}(f)$ in $L^1$). I don't expect anything like this to hold for measures. But I thought perhaps the supports of $\mathcal{F}^{-1}(\mathcal{F}(\mu))$ and $\mu$ might be related.
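As a quick numerical sanity check of this inversion statement for nice functions, here is a sketch using $f(x) = e^{-\pi x^2}$, which in this convention is its own Fourier transform (the grid and truncation sizes below are arbitrary choices):

```python
import numpy as np

# Check F^{-1}(F(f)) = f numerically for the Gaussian f(x) = exp(-pi x^2),
# with the 2*pi-in-the-exponent convention from the question.
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

# Forward transform, approximated by a Riemann sum (accurate here because
# the integrand is smooth and decays very fast).
xi = x  # reuse the same grid for the frequency variable
f_hat = np.exp(-2j * np.pi * np.outer(xi, x)) @ f * dx

# Inverse transform back on the original grid.
f_rec = np.exp(2j * np.pi * np.outer(x, xi)) @ f_hat * dx

print(np.max(np.abs(f_rec - f)))  # close to 0
```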

Also, I know that if $\mathcal{F}(\mu) \in L^1(\mathbb{R}^d)$, then $\mu$ has a continuous density.

Best answer

Take $\mu=\delta_0$: then $\mathcal{F}(\mu) \equiv 1$, which is not integrable, so the integral defining $\mathcal{F}^{-1}$ need not make sense. In this case a Fourier inversion formula is still true, and morally it is the one you mention, but you have to give an appropriate sense to the integral defining $\mathcal{F}^{-1}$ (see the EDIT below).

For functions you can run into the same problem. The only case in which the Fourier inversion formula works exactly as you wrote it is when $f$ belongs to the so-called "inversion space" $$\mathcal{I}=\left\{ f\in L^1(\mathbb{R}^d) \,:\, \hat{f}\in L^1(\mathbb{R}^d)\right\}.$$ A typical class of functions belonging to the inversion space is the Schwartz class.

EDIT. Here's the result I am referring to, taken from Billingsley's *Probability and Measure*, Theorem 26.2: if $\mu$ is a probability measure and $\mu(\{a\})=\mu(\{b\})=0$, then $$ \mu((a, b]) = \lim_{T\to \infty} \frac{1}{2\pi}\int_{-T}^T \frac{e^{-it a} - e^{-it b}}{it}\, \mathcal{F}\mu\left(-\frac{t}{2\pi}\right)\, dt.$$ Notes:

  1. the text uses the standard notation of probability theory: $\phi(t)=\mathcal{F}\mu\left(-\frac{t}{2\pi}\right) = \int e^{itx}\, d\mu(x)$ is the characteristic function of $\mu$.
  2. the assumption that $\mu(\{a\})=\mu(\{b\})=0$ can be removed at the cost of making the formula more complicated. There is something on this in exercise 26.12 of the mentioned book. Measures for which $\mu(\{x\})=0$ for all points are called atomless.

In any case, if $\mu$ is atomless and if you interpret $\mathcal F^{-1}\mathcal F \mu$ in the sense of the formula above, then you can say that it is exactly the same measure as $\mu$. In particular it has the same support. If $\mu$ has atoms there is some more work to be done but the result is the same.
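To see the formula from the EDIT in action, here is a numerical sketch for the standard Gaussian measure $\mu = N(0,1)$, whose characteristic function is $\phi(t)=e^{-t^2/2}$ (the function name `mu_interval`, the truncation $T$, and the grid size are my own illustrative choices, not from the text):

```python
import numpy as np
from math import erf, sqrt, pi

def phi(t):
    # Characteristic function of N(0, 1).
    return np.exp(-t**2 / 2)

def mu_interval(a, b, T=40.0, n=200001):
    # Billingsley's inversion formula (Theorem 26.2), truncated at T:
    # mu((a, b]) ~ (1/2pi) * int_{-T}^{T} (e^{-ita} - e^{-itb}) / (it) * phi(t) dt
    t = np.linspace(-T, T, n)  # n odd, so the middle node sits at t = 0
    with np.errstate(divide='ignore', invalid='ignore'):
        g = (np.exp(-1j * t * a) - np.exp(-1j * t * b)) / (1j * t)
    g[n // 2] = b - a          # removable singularity: the limit at t = 0
    integrand = g * phi(t)
    dt = t[1] - t[0]
    # Trapezoid rule; the endpoint correction is negligible since phi decays fast.
    total = integrand.sum().real - 0.5 * (integrand[0] + integrand[-1]).real
    return total * dt / (2 * pi)

exact = erf(1 / sqrt(2))                # P(-1 < Z <= 1) for Z ~ N(0,1)
print(mu_interval(-1.0, 1.0), exact)    # the two numbers agree closely
```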

Another answer

$\mathcal{F}^{-1}[\mathcal{F}[\mu]]$ exists only as a distribution. It vanishes outside of the support of $\mu$.

Let $f_\epsilon(x) = \int_{\mathbb{R}^d} \frac{1}{\epsilon^d}e^{-\pi |x-y|^2/\epsilon^2}\, d\mu(y)$ for $\epsilon > 0$. If $\mu$ is a compactly supported Borel measure, then $f_\epsilon$ is a Schwartz function.

$\hat{f}_\epsilon(\xi) = \mathcal{F}[\mu](\xi)\, e^{-\pi |\xi|^2 \epsilon^2}$, and for every Borel set $A$ with $\mu(\partial A) = 0$, $$\mu(A) = \lim_{\epsilon \to 0} \int_A f_\epsilon(x)\,dx = \lim_{\epsilon \to 0} \int_A \mathcal{F}^{-1}[\hat{f_\epsilon}](x)\,dx.$$

Moreover, $f_\epsilon \to 0$ locally uniformly outside of the support of $\mu$, which makes precise the sense in which $\mathcal{F}^{-1}[\mathcal{F}[\mu]]$ vanishes there.
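The mollification argument can be sketched numerically in $d = 1$ for a purely atomic measure (the atoms and weights below are illustrative choices, not from the answer):

```python
import numpy as np

# mu = 0.3 * delta_0 + 0.7 * delta_1, a compactly supported measure.
atoms = np.array([0.0, 1.0])
weights = np.array([0.3, 0.7])

def f_eps(x, eps):
    # f_eps(x) = int (1/eps) * exp(-pi (x - y)^2 / eps^2) dmu(y)
    kern = np.exp(-np.pi * (x[None, :] - atoms[:, None])**2 / eps**2) / eps
    return weights @ kern

x = np.linspace(-2.0, 3.0, 50001)
dx = x[1] - x[0]

for eps in (0.5, 0.1, 0.02):
    A = (x > -0.5) & (x < 0.5)             # a continuity set of mu
    mass = f_eps(x, eps)[A].sum() * dx     # -> mu((-0.5, 0.5)) = 0.3
    far = f_eps(np.array([2.5]), eps)[0]   # a point outside supp(mu)
    print(eps, mass, far)
```

As $\epsilon \to 0$ the mass over the continuity set converges to $\mu(A) = 0.3$, while the value at the point $2.5$, away from the support, collapses to $0$.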