If $(X_n)$ is independent with $X_n$ Bernoulli of parameter $\frac1n$, how to show that $X_n$ doesn't converge almost surely to $0$?


If $X_n \sim \mathrm{Bernoulli}(\frac{1}{n})$ independently, then I know that $\sum_{n=1}^{\infty} P(X_n = 1) = \sum_{n=1}^{\infty} \frac{1}{n} = \infty$, so by the second Borel–Cantelli lemma (which applies because the $X_n$ are independent), $P(\limsup_{n \to \infty}\{X_n=1\}) = 1$. This means that the event $\{X_n = 1\}$ happens infinitely often with probability 1. However, I do not know how to rigorously use this to prove that $X_n$ doesn't converge to $0$ almost surely. I know that the definition of almost sure convergence is:

$$ P(\{\omega \in \Omega: \lim_{n \to \infty}X_n(\omega) = X(\omega)\}) = 1 $$

How can I use the above to show that this doesn't hold?
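As an illustration (a simulation sketch, not a proof), one can sample a single path of independent $X_n \sim \mathrm{Bernoulli}(\frac1n)$ and observe that ones keep appearing at arbitrarily late indices, which is exactly what the Borel–Cantelli conclusion predicts; the horizon `N` here is an arbitrary choice:

```python
import random

random.seed(0)

# One sample path of independent X_n ~ Bernoulli(1/n), n = 1..N.
# If X_n -> 0 almost surely, ones would have to stop appearing
# eventually; instead, very late indices with X_n = 1 keep turning up.
N = 100_000
ones = [n for n in range(1, N + 1) if random.random() < 1 / n]
print(ones[-5:])  # the five largest n <= N with X_n = 1 on this path
```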

BEST ANSWER

Note that $$\mathbb P\left(\left\{\omega\in\Omega:\lim_{n\to\infty} X_n(\omega)=X(\omega)\right\} \right)=1 $$ is equivalent to $$\mathbb P\left(\liminf_{n\to\infty}\ \{|X_n-X|<\varepsilon\} \right)=1, \text{ for all }\varepsilon > 0. $$ This follows directly from the definition of convergence of a sequence of real numbers: $$\omega\in\left\{\lim_{n\to\infty} X_n= X\right\}\iff \omega\in\bigcup_{n=1}^\infty\bigcap_{k=n}^\infty\{|X_k-X|<\varepsilon \}, \text{ for all }\varepsilon > 0.$$

Now, since \begin{align} \left(\limsup_{n\to\infty}\ \{X_n=1\}\right)^c &= \left(\bigcap_{n=1}^\infty\bigcup_{k=n}^\infty \{X_k=1\} \right)^c\\ &= \bigcup_{n=1}^\infty\bigcap_{k=n}^\infty \{X_k=0\}\\ &= \liminf_{n\to\infty}\ \{X_n=0\}, \end{align} (using that $\{X_k=1\}^c=\{X_k=0\}$, as $X_k$ only takes the values $0$ and $1$), Borel–Cantelli gives $$\mathbb P\left(\liminf_{n\to\infty}\ \{X_n=0\}\right)=0. $$

From this the result follows: taking $X=0$ and $\varepsilon=\tfrac12$, the event $\{|X_n-0|<\tfrac12\}$ is exactly $\{X_n=0\}$, so almost sure convergence to $0$ would force this last probability to equal $1$, a contradiction.
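The key probability above can even be checked by hand: by independence, the probability that $X_k=0$ for every $k$ with $N<k\le M$ is $\prod_{k=N+1}^{M}\left(1-\frac1k\right)=\frac{N}{M}$ (the product telescopes), which tends to $0$ as $M\to\infty$. A small exact computation (a sketch; the choice $N=10$ is arbitrary) confirms this:

```python
from fractions import Fraction
from math import prod

# Exact probability that X_k = 0 for all N < k <= M, by independence:
#   prod_{k=N+1}^{M} (1 - 1/k) = N/M   (telescoping product).
# Letting M -> infinity shows P(X_k = 0 for all k > N) = 0 for every N,
# hence P(liminf {X_n = 0}) = 0 -- the fact Borel-Cantelli delivers.
def prob_all_zero(N, M):
    return prod(Fraction(k - 1, k) for k in range(N + 1, M + 1))

for M in (100, 10_000):
    print(M, prob_all_zero(10, M))  # equals Fraction(10, M)
```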

ANOTHER ANSWER

Let us call $A$ the event $\{\limsup_{n \to \infty}X_n=1\}$ (shorthand notation for $\{\omega : \limsup_{n \to \infty}X_n(\omega)=1\}$; since each $X_n$ takes values in $\{0,1\}$, this is exactly the event that $X_n=1$ infinitely often). You have already proved that $P(A)=1$.

Let us call $B$ the event $\{\lim_{n \to \infty}X_n=0\}$ (shorthand notation for $\{\omega : \lim_{n \to \infty}X_n(\omega)=0\}$). Almost sure convergence to $0$ is equivalent (according to your definition) to $P(B)=1$.

But the events $A$ and $B$ are disjoint (this is not probability anymore, just the deterministic fact that a sequence containing infinitely many ones cannot converge to zero).

Thus $B \subset A^c$, so $P(B) \leq P(A^c) = 1-P(A) = 1-1 = 0$. Hence $P(B)=0$, and the definition of almost sure convergence is not met.


That was the developed argument; once you get more accustomed to probability you'd say: almost surely the sequence has infinitely many ones, so almost surely it does not converge to $0$, and hence there can't be almost sure convergence.
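The relation $P(B) \le 1 - P(A)$ can also be eyeballed numerically. The Monte Carlo sketch below (with arbitrary parameters `N`, `M`, `trials`) estimates the fraction of simulated paths in which some $X_n=1$ occurs with $N < n \le M$; the exact value is $1 - N/M$ by the telescoping-product computation, and it approaches $1$ as $M$ grows, matching $P(A)=1$:

```python
import random

random.seed(1)

# Fraction of simulated paths of independent X_n ~ Bernoulli(1/n)
# that contain at least one n in (N, M] with X_n = 1.
# Exact value: 1 - N/M, which tends to 1 as M -> infinity.
def frac_with_late_one(N, M, trials=1000):
    hits = sum(
        any(random.random() < 1 / n for n in range(N + 1, M + 1))
        for _ in range(trials)
    )
    return hits / trials

print(frac_with_late_one(N=50, M=2000))  # exact value: 1 - 50/2000 = 0.975
```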


Last remark: you haven't just proven the absence of almost sure convergence, but the stronger statement of almost sure non-convergence. Try to wrap your head around these two statements, and write their formal definitions yourself as a good exercise.