Convergence in probability and in law


I wanted to solve the following task: Let $Y_i$, $i=1,2,\dots$ be i.i.d. random variables with $E[Y_i]=0$ and $E[Y_i^2]=\sigma^2 \in (0,\infty)$; compute $\lim_{n \to \infty} \mathbb{P}(S_n=0)$, where $S_n:=\sum_{i=1}^n Y_i$.

I wanted to prove first that it converges in probability. Since $\frac{S_n}{n}=0 \Leftrightarrow S_n=0$, we have $\mathbb{P}(S_n=0)=\mathbb{P}(\frac{S_n}{n}=0)$, so I thought we could apply the weak law of large numbers, supposing that $\mathbb{P}(\frac{S_n}{n}=0)=1-\mathbb{P}(|\frac{S_n}{n}|>\epsilon)$ for all $\epsilon>0$. By the WLLN, $\lim_{n \to \infty}\mathbb{P}(|\frac{S_n}{n}|>\epsilon)=0$, so it would follow that $\lim_{n \to \infty} \mathbb{P}(\frac{S_n}{n}=0)=1$ and hence $\lim_{n \to \infty} \mathbb{P}(S_n=0)=1$.

However, I then saw a solution using convergence in law, according to which, for all $\epsilon>0$, $\mathbb{P}(S_n=0) \leq \mathbb{P}(S_n \in (-\epsilon\sqrt n, \epsilon\sqrt n))=\mathbb{P}(\frac{S_n}{\sqrt n} \in (-\epsilon, \epsilon)) \to \mu((-\epsilon,\epsilon))$, where $\mu$ is the $N(0,\sigma^2)$ distribution, by the central limit theorem. This implies that $\limsup_{n \to \infty} \mathbb{P}(S_n=0)\leq \mu((-\epsilon,\epsilon))$, and letting $\epsilon \to 0$ gives $\lim_{n \to \infty} \mathbb{P}(S_n=0)=0$.
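This conclusion can be sanity-checked numerically. As an assumed concrete case (not part of the problem statement), take Rademacher $Y_i$, i.e. $\mathbb{P}(Y_i=\pm 1)=1/2$, where $\mathbb{P}(S_n=0)$ can be computed exactly and compared with the local CLT rate $\sqrt{2/(\pi n \sigma^2)}$ with $\sigma=1$:

```python
from math import comb, pi, sqrt

def p_sn_zero(n: int) -> float:
    """Exact P(S_n = 0) for iid Rademacher Y_i with P(Y_i = 1) = P(Y_i = -1) = 1/2."""
    if n % 2:  # S_n has the same parity as n, so P(S_n = 0) = 0 for odd n
        return 0.0
    # S_n = 0 iff exactly n/2 of the Y_i equal +1
    return comb(n, n // 2) / 2 ** n

# Exact probability vs. the local CLT approximation sqrt(2 / (pi * n))
for n in (10, 100, 1000, 10000):
    print(n, p_sn_zero(n), sqrt(2 / (pi * n)))
```

The exact values decay to $0$ like $n^{-1/2}$, matching the second solution and showing that $\mathbb{P}(S_n=0)\to 1$ cannot be right in general.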

Am I confusing something in the convergence-in-probability part? Does this second proof contradict my first one, or have I misunderstood the two types of convergence?

Thanks in advance


There are 2 answers below.

Answer 1

For any random variable $X$, $$1-\mathbb{P}[X=0]=\mathbb{P}[X\neq 0]=\lim_{k\to\infty}{\mathbb{P}\left[|X|\geq\frac{1}{k}\right]}$$ by continuity from below, since the events $\{|X|\geq\frac{1}{k}\}$ increase to $\{X\neq 0\}$.

You know from the WLLN that $$\lim_{n\to\infty}{\mathbb{P}\left[\left|\frac{S_n}{n}\right|\geq\frac{1}{k}\right]}=0$$ Thus $$0=\lim_{k\to\infty}{\lim_{n\to\infty}{\mathbb{P}\left[\left|\frac{S_n}{n}\right|\geq\frac{1}{k}\right]}}$$ But you want to conclude that $$0=\lim_{n\to\infty}{\lim_{k\to\infty}{\mathbb{P}\left[\left|\frac{S_n}{n}\right|\geq\frac{1}{k}\right]}}$$ instead.

In general, limits cannot be interchanged. This is one of those cases.

In particular, what happens is that you have a "traveling hump": for each $k$, most of the "weight" of $\mathbb{P}\left[\left|\frac{S_n}{n}\right|\geq\frac{1}{k}\right]$ is concentrated at small values of $n$ (from about $0$ to $\sqrt{k}$, I believe). As $k$ increases, the hump shifts toward larger $n$. Fixing $k$ and taking the limit in $n$ first lets us travel past the hump and take a limit of zeros. But taking the limit in $k$ first, for each fixed $n$, always includes the hump terms, so the outer limit in $n$ arrives too late to remove them.
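The same interchange failure shows up in a minimal toy double sequence, unrelated to the probabilities above but purely illustrative of why the two iterated limits can differ:

```python
def a(n: int, k: int) -> float:
    # Toy double sequence: for each fixed k, a(n, k) -> 1 as n -> infinity,
    # but for each fixed n, a(n, k) -> 0 as k -> infinity.
    return n / (n + k)

# lim_{k} lim_{n} a(n, k) = 1, while lim_{n} lim_{k} a(n, k) = 0
print(a(10**9, 5))   # inner limit in n first: value near 1
print(a(5, 10**9))   # inner limit in k first: value near 0
```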

Answer 2

It seems that you want to use the fact that the convergence $Y_n\to 0$ in probability gives information about $\mathbb P(Y_n=0)$. However, the latter quantity can be any number between $0$ and $1$. Indeed, if $(p_n)_{n\geqslant 1}$ is a sequence of elements of the unit interval, let $\Omega$ be the unit interval with Lebesgue measure and $Y_n=n^{-1}\mathbf{1}_{(p_n,1)}$. Then $Y_n\to 0$ in probability (since $|Y_n|\leqslant 1/n$), and $\mathbb P(Y_n=0)=p_n$.
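This counterexample is easy to simulate. In the sketch below, a uniform draw $U$ on $(0,1)$ plays the role of $\omega\in\Omega$, so $Y_n=0$ exactly when $U\leq p_n$, an event of probability $p_n$:

```python
import random

def estimate_p_zero(n: int, p_n: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of P(Y_n = 0) for Y_n = (1/n) * 1_{(p_n, 1)}(U),
    with U uniform on (0, 1). Note n only scales the nonzero value 1/n;
    the zero set {U <= p_n} does not depend on n."""
    zeros = sum(1 for _ in range(trials) if random.random() <= p_n)
    return zeros / trials

random.seed(0)
print(estimate_p_zero(n=10, p_n=0.3))  # estimate close to p_n = 0.3
```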