I just read a problem in a book and they used this logic below. I'm trying to reason it out. Some help would be appreciated.
Let $ ( \Omega, \mathcal{F}, P )$ be a probability space.
Suppose $(X_n)_{n \ge 1}$ is a sequence of i.i.d. random variables with $$ P (X_n > x ) = e^{-x},\quad x\ge 0, $$ i.e., each $X_n$ is exponential with rate $1$.
By the first and second Borel–Cantelli lemmas, $$ P (X_n > \alpha \log n \text{ i.o.} ) = \begin{cases} 0 & \alpha > 1, \\ 1 & \alpha \leq 1. \end{cases} $$
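(To spell out the computation behind this dichotomy: since $P(X_n > x) = e^{-x}$,
$$ P(X_n > \alpha \log n) = e^{-\alpha \log n} = n^{-\alpha}, $$
and $\sum_n n^{-\alpha}$ converges exactly when $\alpha > 1$. BC1 then gives probability $0$ for $\alpha > 1$; since the $X_n$ are independent, BC2 gives probability $1$ for $\alpha \le 1$.)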
Let $L= \limsup_n \frac {X_n}{\log(n)} $.
The book then uses the fact that $$ \{\omega : X_n(\omega) > \log n \text{ i.o.} \} \subset \{ L \geq 1 \}.$$
From this one infers that $P( L \geq 1) = 1$.
The author later shows that $P ( L > 1 ) = 0 $, so that in fact $ P(L=1) = 1$.
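(A sketch of how that last step typically goes, though the book may argue differently: for each $k \ge 1$, $\{L > 1 + \tfrac1k\} \subset \{X_n > (1+\tfrac1k)\log n \text{ i.o.}\}$, which has probability $0$ by the $\alpha > 1$ case. Since
$$ \{L > 1\} = \bigcup_{k \ge 1} \Big\{L > 1 + \tfrac1k\Big\} $$
is a countable union of null sets, $P(L > 1) = 0$.)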
This has nothing to do with probability. The question is essentially why, given a real sequence $\{a_n,n\ge 1\}$, the fact that $a_n>1$ infinitely often implies $\limsup_n a_n\ge 1$.
Well, the $\limsup$ of a sequence is at least the $\limsup$ of any of its subsequences. So if there is a subsequence with $a_{n_k}\ge 1$ for all $k$, then $$\limsup_n a_n \ \ge\ \limsup_k a_{n_k} \ \ge\ 1$$ (note that the weak inequality $a_{n_k}\ge 1$ already suffices; strictness is not needed).
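Not part of the proof, but a quick simulation (a sketch assuming standard NumPy; the counts quoted in comments are heuristic expectations, not exact) makes the dichotomy visible: exceedances of $\log n$ keep occurring, but only at the slow $\sum 1/n$ rate, while the ratio $X_n/\log n$ in the tail stays pinned near $1$.

```python
import numpy as np

# X_n ~ Exp(1); compare X_n against alpha * log(n) for alpha = 1 and alpha = 2.
rng = np.random.default_rng(0)
N = 1_000_000
n = np.arange(2, N + 1)              # start at n = 2 so that log(n) > 0
X = rng.exponential(scale=1.0, size=n.size)
ratio = X / np.log(n)

# alpha = 1: each exceedance has probability 1/n, a divergent series,
# so exceedances recur forever -- but only about log(N) ~ 14 of them
# show up among a million samples.
exceed_1 = int(np.sum(ratio > 1.0))

# alpha = 2: exceedance probabilities n^{-2} are summable, so by BC1
# only finitely many occur; typically none this far out.
exceed_2 = int(np.sum(ratio > 2.0))

# The largest ratio over the tail n in [N/2, N] should hover near 1.
tail_max = float(ratio[n >= N // 2].max())

print(exceed_1, exceed_2, round(tail_max, 3))
```

The slow divergence of $\sum 1/n$ is exactly why the $\alpha = 1$ exceedances are rare in any finite sample yet occur infinitely often almost surely.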