On the characteristic function of random variables


For all random variables that admit a probability density function (PDF), the characteristic function provides an alternative way to completely determine the probability distribution. Why is that? The explanation I gave to myself is that the Fourier transform operator $\mathcal{F}$ is unitary on $L^2$, and the PDF, as a (Radon–Nikodym) derivative, is unique whenever it exists. So for square-integrable PDFs this makes sense... but in general, the only requirement on a PDF $p(x,t)$ is that it be integrable ($L^1$). What guarantees, then, that the map $\mathcal{F}[p](k,t)$ uniquely defines an object?
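To fix notation: if a random variable $X$ has density $p\in L^{1}(\Bbb{R})$, its characteristic function is
$$\psi(t)=\Bbb{E}\left[e^{itX}\right]=\int_{\Bbb{R}}e^{itx}\,p(x)\,dx,$$
which is $\mathcal{F}[p]$ up to sign and normalization conventions, and is well defined for every integrable $p$ because $|e^{itx}|=1$.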

BEST ANSWER

What do you mean by the "reason for this fact"? Using the characteristic function, you can recover the distribution $\mu$ via the inversion formula:

$$\mu\bigl((a,b)\bigr)+\frac{1}{2}\mu(\{a\})+\frac{1}{2}\mu(\{b\})=\frac{1}{2\pi}\lim_{M\to\infty}\int_{-M}^{M}\frac{e^{-ita}-e^{-itb}}{it}\,\psi(t)\,dt,$$

where $\psi$ is the characteristic function. So knowing $\psi$ uniquely determines the distribution $\mu$.
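As a quick sanity check, here is a minimal numerical sketch (assuming `numpy` and `scipy` are available) of the formula for the standard normal distribution, whose characteristic function is $\psi(t)=e^{-t^{2}/2}$; that law has no atoms, so the two atom terms vanish and the truncated integral should approach $\Phi(b)-\Phi(a)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

a, b = -1.0, 1.0   # interval (a, b)
M = 50.0           # truncation level standing in for the M -> infinity limit

def psi(t):
    # Characteristic function of the standard normal distribution.
    return np.exp(-t ** 2 / 2)

def integrand(t):
    # Real part of (e^{-ita} - e^{-itb}) / (it) * psi(t); the imaginary part
    # is odd in t and integrates to zero over [-M, M].
    if t == 0.0:
        return (b - a) * psi(0.0)   # removable singularity at t = 0
    return (np.sin(t * b) - np.sin(t * a)) / t * psi(t)

estimate, _ = quad(integrand, -M, M, limit=200)
print(estimate / (2 * np.pi))     # inversion-formula value of mu((a, b))
print(norm.cdf(b) - norm.cdf(a))  # exact value, about 0.6827
```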

And this is true for ALL random variables, not just for those that have a PDF.

As you can see, we are able to recover the measure (distribution) $\mu$ itself that the random variable induces on $\Bbb{R}$. The right-hand side above determines the distribution function $F$ at all of its continuity points, and since $F$ is right-continuous with at most countably many discontinuities, this pins down $F$, and hence $\mu(a,b]$, for all $a<b$. Once the measure of intervals of the form $(a,b]$ is known, the measure $\mu$ is uniquely determined by the Carathéodory Extension Theorem.

Alternatively, if $\mu$ and $\nu$ are two probability measures agreeing on a $\pi$-system that generates the Borel $\sigma$-algebra, then by the Sierpiński–Dynkin $\pi$-$\lambda$ theorem, $\mu=\nu$ on the Borel $\sigma$-algebra. In this case $\{\bigcup_{i=1}^{n}(a_{i},b_{i}]\,:\,a_{i},b_{i}\in\Bbb{R}\,,n\in\Bbb{N}\}$ is a $\pi$-system that generates the Borel $\sigma$-algebra.
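To spell out the $\pi$-$\lambda$ step, here is a minimal sketch of the standard argument. Consider
$$\mathcal{L}=\{A\in\mathcal{B}(\Bbb{R}):\mu(A)=\nu(A)\}.$$
Since $\mu$ and $\nu$ are probability measures, $\Bbb{R}\in\mathcal{L}$; if $A\in\mathcal{L}$ then $\mu(A^{c})=1-\mu(A)=1-\nu(A)=\nu(A^{c})$, so $A^{c}\in\mathcal{L}$; and if $A_{n}\uparrow A$ with $A_{n}\in\mathcal{L}$, continuity from below gives $\mu(A)=\lim_{n}\mu(A_{n})=\lim_{n}\nu(A_{n})=\nu(A)$. Hence $\mathcal{L}$ is a $\lambda$-system containing the $\pi$-system above, and the $\pi$-$\lambda$ theorem yields $\mathcal{B}(\Bbb{R})\subseteq\mathcal{L}$, i.e. $\mu=\nu$.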

For a proof of the inversion formula used above, see Durrett, Theorem 3.3.11, or Theorem 9.5.1 in Sidney Resnick, A Probability Path.