How do we apply 3.3.2 to $X-Y$ to get the first equality of $$\lim_{T\to \infty} \frac{1}{2T} \int^T_{-T} | \varphi(t)|^2 \, dt = P(X-Y=0) = \sum_x \mu(\{x\})^2$$ in 3.3.3? And how does it imply that if $\varphi(t) \to 0$ as $t \to \infty$, then $\mu$ has no point masses?
A probability measure that has no point masses
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail).
There are 2 answers below.
$P(X=Y)=\int\!\int_{\{x=y\}} d\mu(x)\,d\mu(y)$ if $X$ and $Y$ are i.i.d. with distribution $\mu$. By Fubini, integrating out the inner variable gives $\int\!\int_{\{x=y\}} d\mu(x)\,d\mu(y) = \int \mu(\{x\})\, d\mu(x)$. For any probability measure $\mu$ there are at most countably many points $x$ with $\mu(\{x\})>0$; call these $x_1,x_2,\dots$ Then the integral becomes $\sum_i (\mu(\{x_i\}))^{2}$.
The characteristic function of $X-Y$ is $\lvert\varphi\rvert^{2}$, so 3.3.3 follows from 3.3.2 applied to $X-Y$.
If $\varphi(t) \to 0$ as $t \to \infty$, then $\lvert\varphi(t)\rvert^2 \to 0$ as $\lvert t\rvert \to \infty$, so its average over $[-T,T]$, which is the LHS of the equation in 3.3.3, tends to $0$; hence $\mu(\{x\})=0$ for all $x$.
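
As a quick numerical illustration of this identity (a sketch under assumptions: the atom locations, weights, and truncation level $T$ below are made-up choices, and NumPy is assumed available), one can approximate the time average of $\lvert\varphi(t)\rvert^2$ for a purely atomic measure and compare it with $\sum_x \mu(\{x\})^2$:

```python
import numpy as np

# Numerical sanity check (illustrative, not from the original answers):
# approximate (1/2T) * integral_{-T}^{T} |phi(t)|^2 dt for a purely atomic
# measure and compare it with sum_x mu({x})^2.
atoms   = np.array([0.0, 1.0, 2.5])          # hypothetical atom locations
weights = np.array([0.5, 0.3, 0.2])          # mu = 0.5*d_0 + 0.3*d_1 + 0.2*d_{2.5}

def phi(t):
    # Characteristic function of mu: phi(t) = sum_j w_j * exp(i * t * x_j)
    return (weights * np.exp(1j * np.outer(t, atoms))).sum(axis=1)

T = 2000.0                                   # truncation level; larger T = closer to the limit
t = np.linspace(-T, T, 400_001)              # uniform grid on [-T, T]
time_avg = np.mean(np.abs(phi(t)) ** 2)      # Riemann estimate of the average

print("time average of |phi|^2   :", round(float(time_avg), 4))   # ~ 0.38
print("sum of squared atom masses:", float(np.sum(weights ** 2))) # 0.25 + 0.09 + 0.04 = 0.38
```

The cross terms $e^{it(x_j - x_k)}$ average out like $\sin(T\Delta)/(T\Delta)$, so the printed values should agree to a few decimal places.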


Since $\varphi$ is the characteristic function of $X$ and of $Y,$ and since those variables are independent, the characteristic function of $X - Y$ is $$ \varphi(t) \varphi(-t) = \varphi(t)\overline{\varphi(t)} = \lvert \varphi(t)\rvert^2 ,$$ using that $\varphi(-t) = \overline{\varphi(t)}$ for the characteristic function of a real random variable.
If $\mu_{X-Y}$ is the distribution of $X-Y$, then $\mu_{X-Y}(\{0\}) = P(X - Y = 0).$ Applying part (i) of 3.3.2 to the random variable $X - Y$, since $e^{-it\cdot 0} = 1$ we have $$ \mu_{X-Y}(\{0\}) = \lim_{T\to \infty} \frac1{2T} \int^T_{-T} \lvert \varphi(t)\rvert^2 \, dt.$$
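For reference, part (i) of 3.3.2, in the form the answer is using, recovers the mass of an atom at any point $a$ from the characteristic function:
$$ \mu(\{a\}) = \lim_{T\to \infty} \frac1{2T} \int^T_{-T} e^{-ita}\, \varphi_\mu(t)\, dt; $$
the display above is the case $a = 0$, applied to $\mu_{X-Y}$, whose characteristic function is $\lvert\varphi\rvert^2$.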
Two things equal to $\mu_{X-Y}(\{0\})$ are equal to each other; hence the first equality in 3.3.3.
By independence, the probability that $X=Y=x$ is $(\mu(\{x\}))^2,$ hence the second equality.
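In symbols (a step left implicit above): the event $\{X=Y=x\}$ has positive probability only when $x$ is an atom of $\mu$, so
$$ P(X=Y) = \sum_{x\,:\,\mu(\{x\})>0} P(X=x)\,P(Y=x) = \sum_x \mu(\{x\})^2. $$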
If $\varphi(t)\to 0$ as $t \to \infty$, then $\lvert\varphi(t)\rvert^2 \to 0$ as $\lvert t\rvert \to \infty$, so the averages on the left tend to $0$:
$$ \lim_{T\to \infty} \frac1{2T} \int^T_{-T} \lvert \varphi(t)\rvert^2 dt = 0,$$
and, since every term is nonnegative, $\sum(\mu(\{x\}))^2 = 0$ implies $\mu(\{x\}) = 0$ for all $x.$
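
A concrete check (an illustrative example, not from the original post): let $\mu = \tfrac12\delta_0 + \tfrac12\delta_1$, a fair coin. Then $\varphi(t) = \tfrac12(1 + e^{it})$, so $\lvert\varphi(t)\rvert^2 = \tfrac12(1+\cos t)$ and
$$ \lim_{T\to \infty} \frac1{2T} \int^T_{-T} \frac{1+\cos t}{2}\, dt = \frac12 = \left(\tfrac12\right)^2 + \left(\tfrac12\right)^2 = \sum_x \mu(\{x\})^2, $$
consistent with 3.3.3; note that $\varphi(t)$ does not tend to $0$ here, as it must not, since $\mu$ has atoms.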