From pointwise convergence in probability to uniform convergence in probability for non-decreasing random processes


I have a sequence of non-decreasing random processes $D_n:[0,1]\rightarrow \mathbb{R}$ (for each $n\geq 1$, $u\leq v$ implies $D_n(u)\leq D_n(v)$) such that $D_n(0)=0$ a.s. and for every $t\in [0,1]$ the following convergence holds: $D_n(t)\overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}}t$ (in fact I can even prove it in $\mathbb{L}^2$, but it doesn't seem necessary).

I want to prove a uniform convergence in probability, i.e. $\sup_{t\in [0,1]} \vert D_n(t) -t\vert \overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}}0$.

I managed to prove it (more details below), but the idea is essentially the same as the proof of a standard analytic result (see Julian's answer for more details): pointwise convergence of monotone functions on a compact set to a continuous limit implies uniform convergence. I am asking:

  • Is there a way to apply this theorem directly (without rewriting its proof) in such a context, even though the functions are random?
  • If not, is there an ersatz of Dini's theorem for convergence in probability? It seems too natural not to have been done already...

N.B.: The "standard analytic result" mentioned above is called the "second Dini theorem" in French, but it seems to have no standard English name or reference.
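For concreteness, the statement I am referring to is the following (written out so the answers below have something precise to point to):

```latex
% Second Dini theorem (pointwise-to-uniform for monotone functions):
% if each f_n is monotone and the pointwise limit is continuous,
% the convergence is automatically uniform.
\textbf{Theorem.} Let $f_n:[a,b]\to\mathbb{R}$ be non-decreasing for each $n$,
and suppose $f_n(t)\to f(t)$ for every $t\in[a,b]$, where $f$ is continuous.
Then
\[
  \sup_{t\in[a,b]}\,\vert f_n(t)-f(t)\vert \xrightarrow[n\to\infty]{} 0.
\]
% Note: unlike the classical Dini theorem, the monotonicity here is in $t$
% for each fixed $n$, not monotonicity of the sequence $(f_n)_n$ in $n$.
```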

My proof: Let $\varepsilon >0$ and consider an integer $m>\frac{2}{\varepsilon}$. Then the event $\Big( \big\vert D_n(\frac{k}{m})-\frac{k}{m}\big\vert \leq \frac{\varepsilon}{2} \ \forall \ k=0,\dots, m\Big)$ implies $\sup_{t\in [0,1]}\vert D_n(t)-t\vert \leq \varepsilon$, because the random functions $D_n$ are non-decreasing: on each interval $[\frac{k}{m},\frac{k+1}{m}]$, $D_n(t)-t$ is bounded above by $D_n(\frac{k+1}{m})-\frac{k}{m}\leq \frac{\varepsilon}{2}+\frac{1}{m}\leq\varepsilon$, and similarly from below. Thus the probability of the first event is less than or equal to that of the second: $$\mathbb{P}\left(\left\vert D_n\left(\frac{k}{m}\right)-\frac{k}{m}\right\vert \leq \frac{\varepsilon}{2} \ \forall \ k=0,\dots, m\right)\leq \mathbb{P}\left(\sup_{t\in [0,1]}\vert D_n(t)-t\vert \leq \varepsilon\right).$$ Passing to the complementary events and applying the union bound gives $$\mathbb{P}\left(\sup_{t\in [0,1]}\vert D_n(t)-t\vert > \varepsilon\right)\leq \sum_{k=0}^m \mathbb{P}\left( \left\vert D_n\left(\frac{k}{m}\right)-\frac{k}{m}\right\vert > \frac{\varepsilon}{2}\right).$$ The right-hand side converges to $0$ as $n\to\infty$, since it is a finite sum of terms each going to $0$ (the choice of $m$ depends only on $\varepsilon$, not on $n$).
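As a numerical sanity check of the grid argument, here is a small simulation (this concrete choice of $D_n$ and all function names are mine, not from the question): take $D_n$ to be the empirical CDF of $n$ i.i.d. Uniform(0,1) samples, which is non-decreasing, satisfies $D_n(0)=0$ a.s., and converges pointwise in probability to $t$ by the law of large numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_deviation(n, m=200):
    """Empirical CDF D_n of n Uniform(0,1) samples: a non-decreasing
    process with D_n(0) = 0 a.s. and D_n(t) -> t in probability.
    Bound sup_t |D_n(t) - t| via the grid argument from the proof:
    control |D_n(k/m) - k/m| at the m+1 grid points; monotonicity
    then controls the supremum up to an extra 1/m."""
    u = rng.uniform(size=n)
    grid = np.arange(m + 1) / m
    # D_n(k/m) = fraction of samples <= k/m
    emp = np.searchsorted(np.sort(u), grid, side="right") / n
    grid_dev = np.abs(emp - grid).max()
    # on [k/m, (k+1)/m]: |D_n(t) - t| <= grid_dev + 1/m
    return grid_dev + 1.0 / m

for n in [100, 10_000, 1_000_000]:
    print(n, round(sup_deviation(n), 4))
```

With the fixed seed, the printed bounds should shrink towards $0$ as $n$ grows, consistent with the uniform convergence the question asks about (here it is of course also guaranteed by Glivenko-Cantelli).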


There are 2 answers below.


Edit: Thanks to the OP for spotting the flaw in my earlier argument.

There is a way to use only well-known results (but in my opinion your proof is much nicer). I assume that $D_n$ is càdlàg (since you only treat piecewise-constant $D_n$, this is just a matter of defining the endpoints of the constant stretches appropriately). The space of càdlàg functions $[0,1]\to\mathbb{R}$ is equipped with Skorohod's $M_1$ topology. I refer you to Whitt, Stochastic-Process Limits, for details; there you can also find all of the following results.

Lemma 12.4.2. Suppose that $f_n\to f$ in $M_1$. If $f$ is continuous, then $f_n\to f$ uniformly.

We therefore aim to show tightness in $M_1$: then, since the only possible limit point is the identity, $D_n\to\mathrm{id}$ weakly in $M_1$, and weak convergence to a deterministic limit implies convergence in probability. Lemma 12.4.2 then gives the result.

Tightness in $M_1$ is characterized by

Theorem 12.12.3. Let $(D_n)_n$ be a sequence of random variables with values in the $M_1$-Skorohod space on $[0,1]$. If each $D_n$ is non-decreasing with $D_n(0)=0$, then $(D_n)_n$ is tight iff $\lim_{c\to\infty}\limsup_n \mathbb{P}(D_n(1)>c)=0$ and, for every $\eta>0$, $$ \lim_{\delta\to 0}\limsup_{n}\mathbb{P}\Big(D_n(\delta)\vee\big(D_n(1)-D_n(1-\delta)\big)\geq\eta\Big)=0. $$

It is easy to see that the assumed convergence in probability implies these two conditions. Hence, $(D_n)_n$ is tight in $M_1$ and we can conclude.
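For completeness, here is a sketch of how the two conditions follow from the assumed pointwise convergence in probability (fix $\eta>0$ and let $0<\delta<\eta$):

```latex
% First condition: D_n(1) -> 1 in probability, so for every c > 1,
\[
  \limsup_n \mathbb{P}\big(D_n(1)>c\big)
  \le \limsup_n \mathbb{P}\big(\vert D_n(1)-1\vert > c-1\big) = 0 .
\]
% Second condition: for 0 < delta < eta, both D_n(delta) and
% D_n(1)-D_n(1-delta) converge to delta in probability, hence
\[
  \limsup_n \mathbb{P}\Big(D_n(\delta)\vee\big(D_n(1)-D_n(1-\delta)\big)\ge \eta\Big)
  \le \limsup_n \Big[\mathbb{P}\big(\vert D_n(\delta)-\delta\vert \ge \eta-\delta\big)
  + \mathbb{P}\big(\vert D_n(1)-D_n(1-\delta)-\delta\vert \ge \eta-\delta\big)\Big] = 0 ,
\]
% so the limit over delta -> 0 is trivially 0 for every eta > 0.
```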


Because of the tightness shown by Julian, the sequence $(D_n)$ (viewed as elements of the Skorokhod space with the $M_1$ topology) converges in distribution to the identity process. Now the $M_1$ topology is Polish, so by Skorokhod's representation theorem there are a probability space $(\Omega,\mathcal F,\Bbb P)$ and random processes $X_1(t), X_2(t),\ldots$, $0\le t\le 1$, such that (i) $X_n$ has the same distribution as $D_n$ for each $n$, and (ii) $X_n\to X$ in the $M_1$ sense, a.s., where $X(t)=t$ for $0\le t\le 1$. By the analytic fact cited above (pointwise convergence of monotone functions to a continuous limit implies uniform convergence), the convergence of $X_n$ to $X$ is uniform in $t$, a.s. In particular, $\sup_t|X_n(t)-t|\to 0$ in probability, and since $\sup_t|X_n(t)-t|$ and $\sup_t|D_n(t)-t|$ have the same distribution, $\sup_t|D_n(t)-t|\to 0$ in probability as well.