I'm struggling with the following problem:
Let $X_1, X_2, \dots $ be identically distributed (not necessarily independent) non-negative random variables with finite expected values. Show that for any $\epsilon > 0$, $$\lim_{n \to \infty}P\left(\max_{1\le i\leq n} X_i > n\epsilon\right) = 0$$
I can easily solve it if the variables are independent (e.g. using Doob's martingale inequality), but I'm struggling to prove the above without that assumption. I've tried, e.g., bounding the probabilities in the following way:
$$P\left(\max_{1\le i\leq n} X_i > n\epsilon\right) \leq \sum_{i=1}^n P(X_i > n\epsilon)$$
But after applying Markov's inequality the bound seems 'too loose'.
I was also considering using Doob's inequality in this case as well, but I was not able to construct an appropriate martingale.
The exact question I'm asking is: is the theorem in question true?
The "crude bound" you suggest works: we write $$ \mathbb P\left(\max_{1\leqslant i\leqslant n}X_i>n\varepsilon\right)=\mathbb P\left(\bigcup_{i=1}^n\left\{X_i>n\varepsilon\right\}\right)\leqslant \sum_{i=1}^n\mathbb P\left(\left\{X_i>n\varepsilon\right\}\right). $$ Since the random variables $X_i$ have the same distribution as $X_1$, it follows that $$ \mathbb P\left(\max_{1\leqslant i\leqslant n}X_i>n\varepsilon\right)\leqslant n\mathbb P\left(\left\{X_1>n\varepsilon\right\}\right). $$ Integrating the pointwise inequality $$ n\varepsilon\mathbf 1\left\{X_1>n\varepsilon\right\}\leqslant X_1\mathbf 1\left\{X_1>n\varepsilon\right\} $$ gives that $$ n\varepsilon\mathbb P\left(\left\{X_1>n\varepsilon\right\}\right)\leqslant \mathbb E\left[X_1\mathbf 1\left\{X_1>n\varepsilon\right\}\right] $$ hence $$ \mathbb P\left(\max_{1\leqslant i\leqslant n}X_i>n\varepsilon\right)\leqslant \frac 1\varepsilon \mathbb E\left[X_1\mathbf 1\left\{X_1>n\varepsilon\right\}\right]. $$ The monotone convergence theorem allows us to conclude: since $\mathbb E[X_1]<\infty$ and $X_1\mathbf 1\{X_1\leqslant n\varepsilon\}\uparrow X_1$, we get $\mathbb E\left[X_1\mathbf 1\left\{X_1>n\varepsilon\right\}\right]\to 0$ as $n\to\infty$.
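As a quick sanity check (not part of the proof), here is a sketch in Python of the bound in action for a maximally dependent example: take $X_1 = X_2 = \dots = X$ with $X \sim \mathrm{Exp}(1)$, which is identically distributed but certainly not independent. For $\mathrm{Exp}(1)$ one has the closed form $\mathbb E[X\mathbf 1\{X>a\}] = (a+1)e^{-a}$, so the bound $\frac1\varepsilon\mathbb E[X_1\mathbf 1\{X_1>n\varepsilon\}]$ can be computed exactly and compared with a Monte Carlo estimate of the probability. The distribution and $\varepsilon$ are arbitrary choices for illustration.

```python
import math
import random

random.seed(0)
eps = 0.5
trials = 100_000
# One Exp(1) sample per trial; the "sequence" X_1 = X_2 = ... is this
# single value repeated, so max_{i<=n} X_i = X and the event
# {max > n*eps} is just {X > n*eps}.
samples = [random.expovariate(1.0) for _ in range(trials)]

for n in (1, 10, 100):
    # Monte Carlo estimate of P(max_{1<=i<=n} X_i > n*eps)
    p_hat = sum(x > n * eps for x in samples) / trials
    # Tail bound (1/eps) * E[X * 1{X > n*eps}], using
    # E[X 1{X>a}] = (a+1)e^{-a} for Exp(1)
    bound = (n * eps + 1) * math.exp(-n * eps) / eps
    print(f"n={n:4d}  P(max > n*eps) ~ {p_hat:.5f}  bound = {bound:.5f}")
```

The probability and the bound both vanish as $n$ grows, as the answer predicts, even though the variables are fully dependent.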