Clarification on convergence almost surely


Three common modes of convergence for sequences of random variables are:

  1. Convergence in distribution
  2. Convergence in probability
  3. Convergence almost surely

Convergence in distribution and in probability intuitively make sense to me, since you can quite easily visualise what limit you are trying to evaluate as $n \to \infty$. However, convergence almost surely isn't so clear. A sequence $X_n$ converges to $X$ almost surely if and only if, for every $\epsilon > 0$, $\Bbb{P}(|X_n - X| > \epsilon \text{ infinitely often}) = 0$. (By the first Borel–Cantelli lemma, a sufficient condition for $\Bbb{P}(A_n \text{ infinitely often}) = 0$ is $$\sum_{n=1}^\infty \Bbb{P}(A_n) < \infty.)$$ However, what does it mean (intuitively) for a random variable to converge almost surely? And what do the sets $\{A_n \text{ infinitely often}\}$ and $\{A_n \text{ eventually}\}$ mean, i.e. $\{A_n \text{ infinitely often}\} = \bigcap_{n=1}^\infty \bigcup_{m=n}^\infty A_m$ and $\{A_n \text{ eventually}\} = \bigcup_{n=1}^\infty \bigcap_{m=n}^\infty A_m$?

If there's a simple way of portraying this example (eg a coin toss, die being rolled etc), the help would be greatly appreciated.
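One way to make the Borel–Cantelli dichotomy concrete with coin tosses is a quick simulation. This is only an illustrative sketch (the choice of probabilities $1/2$ and $1/n^2$, the horizon, and the cutoff $n = 100$ are all arbitrary): events whose probabilities sum to infinity keep recurring, while events with $\sum_n \Bbb{P}(A_n) < \infty$ stop occurring after finitely many steps on almost every path.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, horizon = 2000, 5000
n = np.arange(1, horizon + 1)

# Case 1: A_n = "n-th fair coin toss is heads". Here sum P(A_n) diverges
# and the tosses are independent, so by the second Borel-Cantelli lemma
# A_n occurs infinitely often with probability 1.
heads = rng.random((trials, horizon)) < 0.5
frac_recurring = heads[:, 100:].any(axis=1).mean()  # trials with an event beyond n = 100

# Case 2: independent events with P(A_n) = 1/n^2. Now sum P(A_n) < inf,
# so by the first Borel-Cantelli lemma only finitely many A_n occur,
# almost surely -- events beyond n = 100 become very rare.
rare = rng.random((trials, horizon)) < 1.0 / n**2
frac_late = rare[:, 100:].any(axis=1).mean()

print(frac_recurring, frac_late)  # close to 1 vs close to 0
```

In the first case essentially every simulated path still produces events arbitrarily late; in the second, the fraction of paths with any event beyond $n = 100$ is roughly $\sum_{n > 100} 1/n^2 \approx 0.01$.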

I don't know what you mean by intuitively, but the definition of almost sure convergence is very simple to read: it says that $\mathbb P(X_n \to X) = 1$. Let's say you have a sequence $\{X_n\}$ of random variables all defined on a probability space $(\Omega, \Sigma, \mathbb P)$. We say that $X_n \to X$ almost surely if the event $$ \{ \omega \in \Omega \, | \, X_n(\omega) \to X(\omega)\} $$ has probability $1$, which is what the notation $\mathbb P(X_n \to X) = 1$ suggests.

Now that you have this event, you can express it in a way that is more useful for computing bounds, for example as a limit of other events. Note that $X_n(\omega) \to X(\omega)$ means $$ \forall \varepsilon > 0, \quad \exists N_{\varepsilon} \quad \text{s.t.} \quad \forall n > N_{\varepsilon}, \quad |X_n(\omega) - X(\omega)| \le \varepsilon. $$ In other words, for every $\varepsilon > 0$, the event $|X_n(\omega) - X(\omega)| > \varepsilon$ happens only finitely many times. This translates the above event to $$ \bigcap_{\varepsilon > 0} \left(\bigcup_{N \in \mathbb N} \bigcap_{n \ge N} \{ \omega \in \Omega \, | \, |X_n(\omega) - X(\omega)| \le \varepsilon \} \right). $$ Since these events decrease as $\varepsilon$ decreases (so it suffices to take $\varepsilon = 1/k$, $k \in \mathbb N$), the intersection has probability $1$ if and only if for each $\varepsilon > 0$, $$ \mathbb P \left( \bigcup_{N \in \mathbb N} \bigcap_{n \ge N} \{ \omega \in \Omega \, | \, |X_n(\omega) - X(\omega)| \le \varepsilon \} \right) = \lim_{N \to \infty} \mathbb P \left( \bigcap_{n \ge N} \{ |X_n - X| \le \varepsilon \} \right) = 1, $$ where the limit exists because the tail intersections increase with $N$. Note also that $\mathbb P \left( \bigcap_{n \ge N} \{ |X_n - X| \le \varepsilon \} \right) \le \mathbb P(|X_m - X| \le \varepsilon)$ for every $m \ge N$, so this forces $\mathbb P(|X_n - X| \le \varepsilon) \to 1$; this is exactly why almost sure convergence implies convergence in probability (the converse fails in general).

(Remark: when intersecting and taking unions of events over ranges like this, it's useful to read intersections as "for every (possible subscript)" and unions as "there exists (a subscript)". So

"for every $\varepsilon > 0$" $\to$ $\bigcap_{\varepsilon > 0}$

"there exists $N$" $\to$ $\bigcup_{N \in \mathbb N}$

etc.)
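This dictionary between quantifiers and set operations can even be checked mechanically on a finite family of sets. The sample space and the events $A_1, \dots, A_5$ below are made up purely for illustration (and with only finitely many sets, both constructions are dominated by the tail sets, so this is just practice translating $\forall \to \bigcap$ and $\exists \to \bigcup$):

```python
# Toy sample space {0,...,9} and five hand-picked events (hypothetical).
A = {1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3}, 4: {2}, 5: {2, 4}}
ns = sorted(A)

# {A_n i.o.} = intersection over n of union over m >= n of A_m:
# "for EVERY n there EXISTS m >= n with omega in A_m".
inf_often = set.intersection(
    *[set.union(*[A[m] for m in ns if m >= n]) for n in ns]
)

# {A_n eventually} = union over n of intersection over m >= n of A_m:
# "there EXISTS n such that for ALL m >= n, omega in A_m".
eventually = set.union(
    *[set.intersection(*[A[m] for m in ns if m >= n]) for n in ns]
)

print(inf_often, eventually)
```

As expected, $\{A_n \text{ eventually}\} \subseteq \{A_n \text{ i.o.}\}$: if $\omega$ lies in every $A_m$ from some point on, it certainly lies in infinitely many of them.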

Hope that helps.