I have two sequences of stochastic processes $(X_n(t))_{t \in [0,1]}$ and $(Y_{n}(t))_{t \in [0,1]}$, defined on a probability space $(\Omega, \mathcal{F},P)$, and I know that their distance in the sup-norm on $[0,1]$ converges to $0$ almost surely, i.e.
$\sup \limits_{t \in [0,1]} \vert X_n(t) - Y_n(t) \vert \to 0 \quad P-a.s., \quad n \to \infty$,
or equivalently
$P \left (\lim \limits_{n \to \infty} \sup \limits_{t \in [0,1]} \vert X_n(t) - Y_n(t) \vert = 0 \right ) = 1$.
Now I'm wondering whether this also implies (pointwise) convergence in distribution, i.e. that for each fixed $t$, $X_n(t)$ and $Y_n(t)$ converge to the same limit in distribution whenever one of them converges. This result seems very intuitive, but how does one show it formally?
Thanks!
What you want is Slutsky's theorem: if, for some fixed $t$, the sequence $X_n(t)$ converges in distribution, and $Y_n(t)-X_n(t)$ converges to $0$ in probability, then $Y_n(t)$ converges in distribution to the same limit law as $X_n(t)$. In your case, $\vert Y_n(t)-X_n(t) \vert$ is dominated by the sup-norm distance, so $Y_n(t)-X_n(t) \to 0$ almost surely for every $t$, which is stronger than the convergence in probability you need.
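To spell the argument out, here is a short derivation; the limit variable $X(t)$ is an assumption (the claim only applies when $X_n(t)$ converges in distribution to something):

```latex
% Fix t. The pointwise distance is dominated by the sup-norm distance,
% which converges to 0 almost surely by hypothesis:
\[
  \vert Y_n(t) - X_n(t) \vert
  \;\le\; \sup_{s \in [0,1]} \vert X_n(s) - Y_n(s) \vert
  \longrightarrow 0 \quad P\text{-a.s.}, \quad n \to \infty.
\]
% Almost sure convergence implies convergence in probability:
\[
  Y_n(t) - X_n(t) \xrightarrow{\;P\;} 0.
\]
% If in addition X_n(t) converges in distribution to some X(t),
% Slutsky's theorem applied to the sum gives
\[
  Y_n(t) = X_n(t) + \bigl( Y_n(t) - X_n(t) \bigr) \xrightarrow{\;d\;} X(t).
\]
```

Note that this is a statement for each fixed $t$ separately; it says nothing about convergence in distribution of the processes as random elements of a function space, which would require tightness arguments beyond the pointwise claim.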