I'm trying to get an intuitive understanding of different types of convergence for random variables. Say we have random variables $X_1, X_2, \ldots$ and $X$ with respective distribution functions $F_1, F_2, \ldots$ and $F$.
Convergence in distribution is easy enough: it just means $$ F_n(x) \to F(x) $$ for every $x$ at which $F$ is continuous, and we write $X_n \overset{d}{\to} X$.
What I would like is a qualitatively similar statement for convergence in probability. It is defined as $$ \lim_{n \to \infty }\mathbb{P}\big(\lvert X_n - X \rvert > \varepsilon\big) = 0 $$ for all $\varepsilon > 0$. We can rewrite the probability in terms of the distribution function of $\lvert X_n - X \rvert$ as
\begin{align*} \mathbb{P}(\lvert X_n - X \rvert > \varepsilon) &= 1 - \mathbb{P}(\lvert X_n - X \rvert \leq \varepsilon) \\ &= 1 - \hat F_n(\varepsilon), \end{align*} so that we have $$ \lim_{n \to \infty} \hat F_n(\varepsilon) = 1, $$ where $\hat F_n$ is the distribution function of $\lvert X_n - X \rvert$, implying that $\hat F_n$ converges pointwise to the constant function $1$.
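As a sanity check on this reasoning, here is a small numeric sketch under a hypothetical setup (not from the question itself): take $X_n = X + Z_n$ with $Z_n \sim N(0, 1/n)$, so $\lvert X_n - X \rvert = \lvert Z_n \rvert$ and $\hat F_n(\varepsilon) = P(\lvert Z_n \rvert \le \varepsilon) = \operatorname{erf}(\varepsilon \sqrt{n} / \sqrt{2})$, which can be evaluated exactly:

```python
import math

def Fhat(n, eps):
    # cdf of |X_n - X| = |Z_n| with Z_n ~ N(0, 1/n):
    # P(|Z_n| <= eps) = erf(eps * sqrt(n) / sqrt(2))
    return math.erf(eps * math.sqrt(n) / math.sqrt(2))

# Fhat(n, 0.1) climbs toward 1 as n grows, for this fixed eps = 0.1
for n in [1, 10, 100, 10000]:
    print(n, Fhat(n, 0.1))
```

For each fixed $\varepsilon > 0$ the values tend to $1$, matching the displayed limit above.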
Is my reasoning correct here? It feels a little fishy because the constant function $1$ is not a cdf, but I guess a sequence of cdfs can converge pointwise to something that is not itself a cdf? Any insight into this would be appreciated.
$\hat F_n(\varepsilon) \to 1$ for $\varepsilon > 0$, but $\hat F_n(x) = 0$ for $x < 0$, since $\lvert X_n - X \rvert \geq 0$. So $\lim_n \hat F_n$ is a step function: $0$ on $(-\infty, 0)$ and $1$ on $(0, \infty)$. Away from the jump at $0$, this is the distribution function of the constant random variable $0$; in other words, $\lvert X_n - X \rvert \overset{d}{\to} 0$.
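To see the step-function limit concretely, here is a Monte Carlo sketch using the same hypothetical setup as before ($\lvert X_n - X \rvert = \lvert Z_n \rvert$ with $Z_n \sim N(0, 1/n)$, an assumption for illustration only): the empirical cdf stays at $0$ for negative arguments and climbs to $1$ for any positive argument.

```python
import random, math

random.seed(0)

def empirical_Fhat(n, x, samples=20000):
    # toy example: |X_n - X| = |Z_n| with Z_n ~ N(0, 1/n)
    draws = [abs(random.gauss(0, 1 / math.sqrt(n))) for _ in range(samples)]
    return sum(d <= x for d in draws) / samples

# for x < 0 the cdf is identically 0; for x > 0 it tends to 1 as n grows
for x in [-0.5, 0.5]:
    print(x, [empirical_Fhat(n, x) for n in (1, 100, 10000)])
```

The two rows trace out the jump at $0$: the limiting function is $\mathbf{1}_{[0,\infty)}$, i.e. the cdf of the constant $0$.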