Does convergence in the sup norm imply convergence in distribution?


I have two sequences of stochastic processes $(X_n(t))_{t \in [0,1]}$ and $(Y_{n}(t))_{t \in [0,1]}$, defined on a probability space $(\Omega, \mathcal{F},P)$, and I know that their distance in the sup norm on $[0,1]$ converges to $0$ almost surely, i.e.

$\sup \limits_{t \in [0,1]} \vert X_n(t) - Y_n(t) \vert \to 0 \quad P-a.s., \quad n \to \infty$,

or equivalently

$P \left (\lim \limits_{n \to \infty} \sup \limits_{t \in [0,1]} \vert X_n(t) - Y_n(t) \vert = 0 \right ) = 1$.

Now I'm wondering whether this also implies that, for each fixed $t$, $X_n(t)$ and $Y_n(t)$ converge in distribution to the same limit (whenever one of them converges in distribution at all). This result seems very intuitive, but how does one show it formally?

Thanks!

2 Answers

Best answer:

What you want is Slutsky's theorem. If, for some $t$, the sequence $X_n(t)$ converges in distribution, and if $Y_n(t)-X_n(t)$ converges to $0$ in probability, then $Y_n(t)$ converges to the same limit law as $X_n(t)$. In your case you have almost sure convergence of $Y_n(t)-X_n(t)$ to $0$, which is stronger than what you need.
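To see Slutsky's theorem at work numerically, here is a small sketch (not part of the answer; the distributions and names are illustrative choices). Fix a single $t$, model $X_n(t)$ by i.i.d. $N(0,1)$ samples, and set $Y_n(t) = X_n(t) + 1/n$, so that $|Y_n(t) - X_n(t)| \le 1/n \to 0$ surely. The sup distance between the empirical CDFs of the two samples then shrinks as $n$ grows, which is exactly convergence to the same limit law:

```python
import numpy as np

rng = np.random.default_rng(0)

def ecdf_sup_distance(a, b, grid):
    """Sup over `grid` of |F_a - F_b| for the empirical CDFs of samples a, b."""
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(Fa - Fb))

# Fix one t; model X_n(t) by N(0,1) draws and set Y_n(t) = X_n(t) + 1/n,
# so |Y_n(t) - X_n(t)| <= 1/n -> 0 (even surely, not just a.s.).
grid = np.linspace(-4.0, 4.0, 401)
x = rng.standard_normal(200_000)
distances = [ecdf_sup_distance(x, x + 1.0 / n, grid) for n in (10, 100, 10_000)]
print(distances)  # sup-CDF distance shrinks toward 0 as n grows
```

The shrinking CDF distance is only an illustration; the actual proof is the one-line application of Slutsky's theorem above.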

Second answer:

One way of interpreting the convergence of a sequence $X_n$ to $X$ is to say that the ''distance'' between $X$ and $X_n$ is getting smaller and smaller. For example, if we define the distance between $X_n$ and $X$ as $P(|X_n - X| \geq \epsilon)$, we have convergence in probability. Another way to define the distance between $X_n$ and $X$ is

$E(|X_n - X|^r)$, where $r \geq 1$ is a fixed number. This refers to convergence in mean. (Note: for convergence in mean, it is usually required that $E|X_n|^r < \infty$.) The most common choice is $r = 2$, in which case it is called mean-square convergence. (Note: some authors refer only to the case $r = 1$ as convergence in mean.)

Convergence in Mean

Let $r \geq 1$ be a fixed number. A sequence of random variables $X_1, X_2, X_3, \ldots$ converges in the $r$th mean, or in the $L^r$ norm, to a random variable $X$, written $X_n \xrightarrow{L^r} X$, if $\lim_{n \to \infty} E(|X_n - X|^r) = 0$. If $r = 2$, it is called mean-square convergence, written $X_n \xrightarrow{m.s.} X$.
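As a quick numerical illustration of this definition (an illustrative sketch, not from the answer; the choice $X_n = X + U_n$ is my own): take $X \sim N(0,1)$ and $X_n = X + U_n$ with $U_n \sim \mathrm{Uniform}(0, 1/n)$ independent of $X$. Then $E|X_n - X|^2 = E[U_n^2] = 1/(3n^2) \to 0$, so $X_n \to X$ in mean square:

```python
import numpy as np

rng = np.random.default_rng(1)

# X ~ N(0,1); X_n = X + U_n with U_n ~ Uniform(0, 1/n) independent of X.
# Then E|X_n - X|^2 = E[U_n^2] = 1/(3 n^2) -> 0, i.e. mean-square convergence.
x = rng.standard_normal(500_000)
mse = []
for n in (1, 10, 100):
    u = rng.uniform(0.0, 1.0 / n, size=x.size)
    mse.append(np.mean(((x + u) - x) ** 2))  # Monte Carlo estimate of E|X_n - X|^2
print(mse)  # roughly [1/3, 1/300, 1/30000]
```

The Monte Carlo estimates track $1/(3n^2)$, matching the closed-form computation above.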