For example, suppose I have a sequence of random variables $(X_n)$ defined by $X_n = \sum_{k=1}^n I_k$, where $I_k = 1$ if the event $A_k$ occurs and $I_k = 0$ otherwise.
I am looking at a proof that says we should be able to find a subsequence $X_{n_1}, X_{n_2}, \dots$ (with indices $n_1 < n_2 < \dots$) such that ...
I don't understand what this statement means; can someone please clarify?
More specifically, I want to find a subsequence such that $\sum_{k=1}^\infty \frac{\sigma^2(X_{n_k})}{\epsilon^2 E^2(X_{n_k})}<\infty$.
What would a subsequence like that look like?
This is the proof I am looking at:


It's not a subsequence of a random variable; it's a subsequence of a sequence of random variables. For each $n$, you have the random variable $X_n = \sum_{k = 1}^n I_k$. So you have a sequence $I_1, (I_1 + I_2), (I_1 + I_2 + I_3), \dots$, and you take a subsequence of this sequence.
In this case, a subsequence is determined by a sequence of indices $n_1 < n_2 < \dots$, and $$ X_{n_i} = \sum_{k = 1}^{n_i} I_k. $$
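To see numerically why such a subsequence can exist, here is a small sketch assuming the indicators $I_k$ are i.i.d. Bernoulli($p$) (an assumption made here for illustration only; the proof in question may allow more general events $A_k$). Then $\sigma^2(X_n) = np(1-p)$ and $E(X_n) = np$, so the general term is $\frac{\sigma^2(X_n)}{\epsilon^2 E^2(X_n)} = \frac{1-p}{\epsilon^2 p\, n}$: harmonic-like (divergent) along all $n$, but summable along the subsequence $n_k = k^2$, since $\sum_k 1/k^2 < \infty$.

```python
def term(n, p=0.5, eps=0.1):
    """sigma^2(X_n) / (eps^2 * E(X_n)^2) for i.i.d. Bernoulli(p) indicators.

    Equals n*p*(1-p) / (eps^2 * (n*p)^2) = (1-p) / (eps^2 * p * n).
    """
    return (n * p * (1 - p)) / (eps ** 2 * (n * p) ** 2)

# Partial sums along the subsequence n_k = k^2 stay bounded
# (they approach (1-p)/(eps^2 * p) * pi^2/6):
sub_sum = sum(term(k * k) for k in range(1, 10_000))

# ...while along the full sequence they grow without bound with the cutoff,
# like a scaled harmonic series:
full_sum = sum(term(n) for n in range(1, 10_000))

print(f"sum along n_k = k^2 : {sub_sum:.2f}")
print(f"sum along all n     : {full_sum:.2f}")
```

Any subsequence whose indices grow fast enough (e.g. $n_k = k^2$ or $n_k = 2^k$) makes the series converge; that is what "we can find a subsequence such that ..." is asserting.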