Explanation of $\limsup$ of a sequence of random variables in measure theory


The definition I have been given of $\limsup\limits_{n \to \infty} Y_n$, where the $Y_n$ are random variables, is that it is another random variable defined pointwise by $(\limsup\limits_{n \to \infty} Y_n)(\omega) = \limsup\limits_{n \to \infty} Y_n(\omega)$ for every $\omega \in \Omega$, where the $\limsup$ on the R.H.S. is the standard $\limsup$ of a sequence of reals (as that is the definition of $\limsup$ for a sequence of measurable functions).
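To make the pointwise definition concrete, here is a small sketch (my own illustration, not from the original post): fixing a random seed plays the role of fixing one outcome $\omega$, and the $\limsup$ along that sample path is approximated by the supremum over a late tail of the finite sequence.

```python
import numpy as np

# Fixing the seed corresponds to fixing one outcome omega;
# the array below is then the realized sequence Y_1(omega), ..., Y_n(omega).
rng = np.random.default_rng(0)

n = 100_000
Y = rng.standard_normal(n)      # one sample path of IID N(0,1) variables

# Approximate limsup_n Y_n(omega) by the sup over the second half of the path
# (a finite-horizon stand-in for "sup over all n >= N, N large").
tail_sup = Y[n // 2:].max()
print(tail_sup)
```

For IID standard normals this tail supremum keeps growing (on the order of $\sqrt{2\log n}$) as the horizon grows, which is exactly why the normalization in the exercise below matters.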

However, I am then given the exercise: if the $X_i$ are IID standard $N(0,1)$ random variables, prove that $\limsup\limits_{n \to \infty} \frac{X_n}{\sqrt{2\log(n)}} = 1$ almost surely. If I use the definition above, I reason that $X_n(\omega) = X_m(\omega)$ for all $n, m$ and every $\omega$, since all the $X_i$ have the same distribution. The sequence $\frac{X_n(\omega)}{\sqrt{2\log(n)}}$ would then have a constant numerator and so decay to $0$, giving a $\limsup$ of $0$ rather than $1$ almost surely.

I'm sure that I'm the one who has a conceptual error somewhere but I can't see what. I know the question is not wrong as other people have asked it here. Please help me clear up this confusion.

Best answer:

Two random variables having the same distribution does not mean they take the same value at each $\omega$: equality in distribution only says $P(X_n \le t) = P(X_m \le t)$ for all $t$, not $X_n(\omega) = X_m(\omega)$. Indeed, independence already kills that possibility unless the random variable is a.s. constant; for independent continuous random variables one even has $P(X_n = X_m) = 0$ for $n \neq m$.
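A quick simulation (my own sketch, not part of the original answer) makes both points: IID draws with the same law take different values, and along one sample path the running maximum $M_n = \max_{k \le n} X_k$, normalized by $\sqrt{2\log n}$, sits near $1$, consistent with $\limsup_n X_n/\sqrt{2\log n} = 1$ a.s.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1_000_000
X = rng.standard_normal(n)      # IID N(0,1): identical law for every entry

# Point 1: same distribution, different realized values.
print(X[0], X[1])               # two distinct numbers

# Point 2: the running maximum tracks sqrt(2 log n), so the normalized
# maximum is close to 1 (convergence to 1 is slow, so expect ~0.9 here).
M_n = X.max()
ratio = M_n / np.sqrt(2 * np.log(n))
print(ratio)
```

The ratio drifts toward $1$ only logarithmically slowly, so for a finite sample it typically lands a little below $1$; the almost-sure statement is about the limit as $n \to \infty$.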