Let $X_1,\ldots,X_n$ be iid random variables with common pdf $$f(x) = e^{-(x-\theta)}, \quad x > \theta,\ -\infty < \theta < \infty; \qquad 0 \text{ elsewhere.}$$
Let $Y_n = \min({X_1,\ldots,X_n})$. Prove that $Y_n\to\theta$ in probability, by first obtaining the cdf of $Y_n$.
By definition, we want to show that $\lim_{n\to\infty} P[|Y_n-\theta|<\epsilon]=1$ for every $\epsilon>0$.
$$\lim_{n\to\infty}P[-\epsilon<Y_n-\theta\le\epsilon]=\lim_{n\to\infty}P[-\epsilon+\theta<Y_n\le\epsilon+\theta]\tag{1}$$
$$\lim_{n\to\infty}P[Y_n-\theta\le\epsilon]=\lim_{n\to\infty}P[Y_n\le\epsilon+\theta]\tag{2}$$
I know that $(2)$ helps me prove this, but why is it ok to skip $(1)$ ?
From the definition of $f$ it is clear that $X_i \geq \theta$ for each $i$, hence $Y_n \geq \theta$. Now, using independence, $$P(|Y_n-\theta| \geq \epsilon)= P(Y_n \geq \theta+\epsilon)=\bigl(P(X_1 \geq \theta+\epsilon)\bigr)^{n}=\bigl(e^{-\epsilon}\bigr)^{n}=e^{-n\epsilon} \to 0 \quad\text{as } n \to \infty,$$ since $P(X_1 \geq \theta+\epsilon)=\int_{\theta+\epsilon}^{\infty}e^{-(x-\theta)}\,dx=e^{-\epsilon}$.
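For the first part of the question (obtaining the cdf of $Y_n$), the same independence argument gives, for $y > \theta$,
$$F_{Y_n}(y)=P(Y_n\le y)=1-P(X_1>y,\ldots,X_n>y)=1-\prod_{i=1}^{n} P(X_i>y)=1-e^{-n(y-\theta)},$$
and $F_{Y_n}(y)=0$ for $y\le\theta$. Evaluating $1-F_{Y_n}(\theta+\epsilon)$ recovers the bound $e^{-n\epsilon}$ above.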
The reason you can skip the event $Y_n \leq \theta-\epsilon$ is that $Y_n \geq \theta$ almost surely.
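As a sanity check, here is a small Monte Carlo sketch of the convergence (the values of `theta`, `eps`, and the sample sizes are arbitrary choices for illustration, not from the problem). It simulates $Y_n = \min_i X_i$ with $X_i = \theta + \mathrm{Exp}(1)$ and estimates $P(|Y_n-\theta|\ge\epsilon)$, which theory says equals $e^{-n\epsilon}$:

```python
import random

random.seed(0)

theta = 2.0   # hypothetical location parameter
eps = 0.1     # tolerance in the convergence definition
trials = 2000 # Monte Carlo repetitions

def min_of_sample(n):
    # Y_n = min of n draws of X_i = theta + Exp(1),
    # matching the shifted-exponential pdf f(x) = e^{-(x-theta)}, x > theta
    return min(theta + random.expovariate(1.0) for _ in range(n))

def prob_far(n):
    # Monte Carlo estimate of P(|Y_n - theta| >= eps) = P(Y_n >= theta + eps)
    return sum(min_of_sample(n) >= theta + eps for _ in range(trials)) / trials

# Theory: P(Y_n >= theta + eps) = e^{-n*eps}, so e.g. n=5 gives about
# e^{-0.5} ~ 0.61 while n=50 gives about e^{-5} ~ 0.007.
print(prob_far(5), prob_far(50))
```

The estimated probability drops toward 0 as $n$ grows, consistent with $Y_n \to \theta$ in probability.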