Given $X_1, \dots, X_{100} \sim$ iid $\mathrm{Unif}(0,1)$, with $X_{n:n}$ denoting the largest order statistic, I'm asked to show that, for $n=100$ and some $\epsilon > 0$, $P(|X_{n:n}-1|\ge 1) = P(X_{n:n} \ge 1-\epsilon)$.
However, I'm totally lost, because I keep concluding that the statement is not true:
$$P(|X_{n:n}-1|\ge 1) = P(X_{n:n} \ge 2) + P(X_{n:n} \le 0) = 0 + P(X_{n:n} \le 0) = P(X_{n:n} < 0) + P(X_{n:n}=0) = 0 + 0 = 0,$$
since the distribution is supported on $0 \le x \le 1$, and because it is continuous, the probability that it takes any exact value (e.g. $x=0$) is $0$.
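As a sanity check on that calculation, here's a small Monte Carlo sketch (sample size, trial count, and variable names are my own choices) confirming that the event $|X_{n:n}-1|\ge 1$ never occurs for uniforms on $(0,1)$:

```python
import random

n, trials = 100, 100_000

# Simulate the maximum of n iid Unif(0,1) draws, many times.
maxima = [max(random.random() for _ in range(n)) for _ in range(trials)]

# The event |X_{n:n} - 1| >= 1 splits into X_{n:n} >= 2 or X_{n:n} <= 0;
# since every draw lies in [0, 1), neither piece can ever occur.
outside = sum(abs(m - 1) >= 1 for m in maxima)
print(outside)  # 0
```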
Yet $P(X_{n:n} \ge 1-\epsilon)$ is clearly non-zero (right?): that inequality describes a range of values rather than a single point, and even a tiny interval like $(0.9999999999999,\,1)$ has positive probability. So how would I show that this probability is in fact $0$? (I assume it must be, since I believe my calculation above is correct.)
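To check the intuition that $P(X_{n:n} \ge 1-\epsilon)$ is non-zero, here's a quick sketch comparing a Monte Carlo estimate against the exact value from the CDF of the maximum, $P(X_{n:n} \le x) = x^n$ (which follows from independence); the choice $\epsilon = 0.05$ is just an example:

```python
import random

n = 100          # sample size
eps = 0.05       # an example epsilon
trials = 200_000

# Monte Carlo estimate of P(X_{n:n} >= 1 - eps)
hits = sum(max(random.random() for _ in range(n)) >= 1 - eps
           for _ in range(trials))
estimate = hits / trials

# Exact value via the CDF of the maximum: P(X_{n:n} <= x) = x**n
exact = 1 - (1 - eps) ** n

print(estimate, exact)  # both near 1 - 0.95**100 ≈ 0.994
```

For $n = 100$ this probability is not only non-zero but close to $1$, since $(1-\epsilon)^{100}$ is tiny even for small $\epsilon$.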