Suppose $X_1, \ldots, X_n \sim \operatorname{Unif}(0,1)$ are iid random variables and we define $X_{(n)} = \max X_i$ as the largest order statistic. I would like to show that $X_{(n)}$ converges in probability to $1$. To do so, I have:
For all $\varepsilon >0$,
$$ P(|X_{(n)}-1|\geq \varepsilon) = \prod_{i} P(X_i <1-\varepsilon) = (1-\varepsilon)^n $$
Now, taking the limit, I run into an issue: for $0<\varepsilon \leq 1$, as $n \to \infty$, $P(|X_{(n)}-1|\geq \varepsilon)\to 0$. But for $\varepsilon >1$, the quantity $1-\varepsilon$ is negative and $(1-\varepsilon)^n$ misbehaves (for $\varepsilon > 2$ it even blows up). What is the correct way to handle this? Is there a bound on $\varepsilon$?
Would it be valid to write:
$$ P(X_{(n)}\leq 1-\varepsilon) = P(X_{(n)}\leq 1-\varepsilon)\mathbb{1}_{\varepsilon>1} + P(X_{(n)}\leq 1-\varepsilon) \mathbb{1}_{\varepsilon\leq 1} \text{ ?} $$
For $\epsilon \gt 1$ and using $X_i \in [0,1]$ for all $i$, you have:
$P(|X_{(n)}-1|\geq \epsilon) = 0$ since $-1 \le X_{(n)}-1 \le 0$ and $\epsilon \gt 1$
$\displaystyle \prod_{i} P(X_i <1-\epsilon) = 0$ since $0 \le X_i \le 1$ and $1-\epsilon \lt 0$
$(1-\epsilon)^n \not = 0$ since $1-\epsilon \lt 0$
so your second equality is incorrect when $\epsilon \gt 1$. This does not affect the convergence, however, since in that case you already know $P(|X_{(n)}-1|\geq \epsilon) = 0$.
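A quick Monte Carlo check illustrates both regimes (this sketch, with the hypothetical helper `estimate_tail`, is not part of the original argument): for $\epsilon \le 1$ the empirical tail probability tracks $(1-\epsilon)^n$, while for $\epsilon \gt 1$ it is exactly $0$, regardless of what $(1-\epsilon)^n$ evaluates to.

```python
import random

random.seed(0)

def estimate_tail(n, eps, trials=200_000):
    """Empirical P(|max(X_1,...,X_n) - 1| >= eps) for iid Unif(0,1)."""
    hits = 0
    for _ in range(trials):
        x_max = max(random.random() for _ in range(n))
        if abs(x_max - 1) >= eps:
            hits += 1
    return hits / trials

n = 50
for eps in (0.05, 0.1, 1.5):
    # Exact tail: (1 - eps)^n when eps <= 1, and 0 when eps > 1,
    # since |X_(n) - 1| <= 1 almost surely.
    exact = (1 - eps) ** n if eps <= 1 else 0.0
    print(f"eps={eps}: empirical={estimate_tail(n, eps):.4f}, exact={exact:.4f}")
```

With $n = 50$ the empirical value for $\epsilon = 0.05$ sits near $(0.95)^{50} \approx 0.077$, and for $\epsilon = 1.5$ no sample can ever land in the tail.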