I have a sequence of non-decreasing random processes $D_n:[0,1]\rightarrow \mathbb{R}$ (for each $n\geq 1$, $u\leq v$ implies $D_n(u)\leq D_n(v)$) such that $D_n(0)=0$ a.s. and for every $t\in [0,1]$ the following convergence holds: $D_n(t)\overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}}t$ (in fact I can even prove it in $\mathbb{L}^2$, but it doesn't seem necessary).
I want to prove a uniform convergence in probability, i.e. $\sup_{t\in [0,1]} \vert D_n(t) -t\vert \overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}}0$.
I managed to prove it (more details below), but the argument essentially rewrites the proof of a standard analytic result (see Julian's answer for more details): pointwise convergence of monotone functions on a compact interval to a continuous limit implies uniform convergence. I am asking:
- Is there a way to apply this theorem directly (without rewriting its proof) in such a context, even though the functions are random?
- If not, is there an ersatz of Dini's theorem for convergence in probability? It seems too natural not to have been done already...
N.B.: The "standard analytic result" mentioned above is called the "second Dini theorem" in French, but seems to have no standard English name or source.
My proof: Let $\varepsilon >0$ and choose an integer $m>\frac{2}{\varepsilon}$. Then $\Big( \vert D_n(\frac{k}{m})-\frac{k}{m}\vert \leq \frac{\varepsilon}{2} \ \forall \ k=0,\dots, m\Big)$ implies $\sup_{t\in [0,1]}\vert D_n(t)-t\vert \leq \varepsilon$. Indeed, for $t\in[\frac{k}{m},\frac{k+1}{m}]$, monotonicity gives $D_n(t)-t\leq D_n(\frac{k+1}{m})-\frac{k}{m}\leq \frac{\varepsilon}{2}+\frac{1}{m}\leq\varepsilon$, and symmetrically $t-D_n(t)\leq \frac{k+1}{m}-D_n(\frac{k}{m})\leq \frac{1}{m}+\frac{\varepsilon}{2}\leq\varepsilon$. Since the first event implies the second, its probability is at most that of the second, i.e.: $$\mathbb{P}\left(\left\vert D_n\left(\frac{k}{m}\right)-\frac{k}{m}\right\vert \leq \frac{\varepsilon}{2} \ \forall \ k=0,\dots, m\right)\leq \mathbb{P}\left(\sup_{t\in [0,1]}\vert D_n(t)-t\vert \leq \varepsilon\right).$$ Passing to the complementary events and applying the union bound, I get $$\mathbb{P}\left(\sup_{t\in [0,1]}\vert D_n(t)-t\vert > \varepsilon\right)\leq \sum_{k=0}^m \mathbb{P}\left( \left\vert D_n\left(\frac{k}{m}\right)-\frac{k}{m}\right\vert > \frac{\varepsilon}{2}\right).$$ The sum on the right-hand side converges to $0$ since it is a finite sum of terms going to $0$ (the choice of $m$ only depends on $\varepsilon$, not on $n$).
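As a sanity check, here is a quick numerical illustration with a toy instance of my own (not the actual process in question): take $D_n$ to be the empirical CDF of $n$ i.i.d. Uniform$(0,1)$ variables, which is non-decreasing, satisfies $D_n(0)=0$ a.s., and converges pointwise in probability to $t$ by the law of large numbers. The sup deviation is then the Kolmogorov–Smirnov statistic, computed exactly at the jump points:

```python
import random

def sup_deviation(n, rng):
    """Exact sup_{t in [0,1]} |F_n(t) - t| for the empirical CDF F_n of
    n i.i.d. Uniform(0,1) draws.  F_n is a non-decreasing step function
    jumping from (i-1)/n to i/n at the i-th order statistic u_i, so the
    supremum against the identity is attained at a jump point."""
    u = sorted(rng.random() for _ in range(n))
    return max(
        max(abs(i / n - ui), abs((i - 1) / n - ui))
        for i, ui in enumerate(u, start=1)
    )

rng = random.Random(0)
for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}:  sup_t |D_n(t) - t| = {sup_deviation(n, rng):.4f}")
```

The deviation decays at rate of order $1/\sqrt{n}$ (this instance even satisfies the Dvoretzky–Kiefer–Wolfowitz bound), consistent with the claimed uniform convergence in probability.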
Edit: Thanks to the OP for spotting the flaw in my earlier argument.
There is a way to use only well-known results (but imo your proof is much nicer). I assume that $D_n$ is càdlàg (since you only treat piecewise-constant $D_n$, this is just a matter of defining the endpoints of the constant stretches appropriately). Equip the space of càdlàg functions $[0,1]\to\mathbb{R}$ with Skorohod's $M_1$ topology; I refer you to Whitt, Stochastic-Process Limits, for details. There you can also find all of the following results.
We therefore aim to show tightness in $M_1$: once we have it, every subsequential weak limit is identified by the assumed pointwise convergence in probability, so the limit point is unique and $D_n\to\mathrm{id}$ weakly in $M_1$. Since the limit is deterministic, this convergence also holds in probability in the $M_1$ metric; and since $\mathrm{id}$ is continuous, $M_1$-convergence to it coincides with uniform convergence. This gives the result.
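To spell out the chain of implications being used here:
$$\text{tightness in } M_1 \;+\; \text{unique limit point} \;\Longrightarrow\; D_n\to\mathrm{id} \text{ weakly in } M_1 \;\Longrightarrow\; d_{M_1}(D_n,\mathrm{id})\overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}}0 \;\Longrightarrow\; \sup_{t\in[0,1]}\vert D_n(t)-t\vert \overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}}0,$$
the last step because the limit $\mathrm{id}$ is continuous, so $M_1$-convergence to it is uniform convergence.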
Tightness in $M_1$ is characterized by the following two conditions (see Whitt's book): for every $\varepsilon>0$,
$$\lim_{c\to\infty}\limsup_{n\to\infty}\mathbb{P}\left(\sup_{t\in[0,1]}\vert D_n(t)\vert > c\right)=0 \qquad\text{and}\qquad \lim_{\delta\downarrow 0}\limsup_{n\to\infty}\mathbb{P}\big(w_s(D_n,\delta)\geq \varepsilon\big)=0,$$
where $w_s(x,\delta)$ denotes the $M_1$-oscillation function (which also controls the oscillations of $x$ near the endpoints $0$ and $1$).
Both conditions follow from the assumptions. Stochastic boundedness holds because $0=D_n(0)\leq D_n(t)\leq D_n(1)$ and $D_n(1)\overset{\mathbb{P}}{\longrightarrow}1$. For the oscillations: since $D_n$ is non-decreasing, $D_n(t)$ always lies between $D_n(t_1)$ and $D_n(t_2)$ whenever $t_1\leq t\leq t_2$, so the $M_1$-oscillation of $D_n$ vanishes; the oscillations near the endpoints are controlled because $D_n(\delta)\overset{\mathbb{P}}{\longrightarrow}\delta$ and $D_n(1)-D_n(1-\delta)\overset{\mathbb{P}}{\longrightarrow}\delta$. Hence, $(D_n)_n$ is tight in $M_1$ and we can conclude.