Donsker's theorem


I have a question regarding Donsker's theorem.

First, let me fix some notation to state my issue: I consider a Brownian motion $\mathbb{B}(\cdot)$ such that $$\mathbb{B}(t): \Omega \to \mathbb{R}, \quad \omega \mapsto \mathbb{B}(t)(\omega) \text{ is a random variable for every } t > 0,$$ and all paths are continuous, i.e. $\mathbb{B}(\cdot)(\omega): [0,\infty) \to \mathbb{R}, \ t \mapsto \mathbb{B}(t)(\omega)$ is continuous for (almost every) $\omega \in \Omega$. (I omit the other properties a Brownian motion has to satisfy.)

Could I thus identify a Brownian motion with a random variable taking values in $C[0,\infty)$ via $$\mathbb{B}(\cdot): \Omega \to C[0,\infty), \quad \omega \mapsto \mathbb{B}(\cdot)(\omega)?$$

However, the issue I want to address here is the following problem with Donsker's theorem. Let $$S_n(t) = \frac{1}{\sqrt{n}} \left(S_{[nt]} + \{nt\}X_{[nt]+1}\right),$$ where $S_k = X_1 + \cdots + X_k$ is the partial sum of the i.i.d. increments $X_i$, and $[nt]$ and $\{nt\}$ denote the integer and fractional parts of $nt$.

Does Donsker's theorem say that $$S_n(t) \overset{\mathcal{D}}{\to} \mathbb{B}(t)$$ pointwise in $t$ (convergence in distribution of real-valued random variables), or does it say $$S_n(\cdot) \overset{\mathcal{D}}{\to} \mathbb{B}(\cdot)$$ (convergence in distribution of $C[0,\infty)$-valued random variables)?

Or are both the same? To me the second version seems stronger.

Thanks in advance.


Usually, Donsker's theorem is applied to a sequence $\{X_i\}_{i=1}^{\infty}$ of i.i.d. random variables with mean zero and finite variance (WLOG equal to $1$). From these random variables one constructs a sequence of continuous processes $S_t^{(n)}$ that converges in distribution to a Brownian motion, i.e. the measures induced by $S^{(n)}$ on the space $C[0, \infty)$ converge weakly to the Wiener measure, under which the coordinate mapping process is a standard Brownian motion.
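As an illustration of this construction (a minimal sketch of my own, not part of the theorem; the function name `donsker_path` and the choice of Rademacher $\pm 1$ increments are mine), the linearly interpolated process $S_n(t) = \frac{1}{\sqrt{n}}(S_{[nt]} + \{nt\}X_{[nt]+1})$ can be simulated like this:

```python
import numpy as np

def donsker_path(X, t_grid):
    """Evaluate S_n(t) = (S_[nt] + {nt} * X_{[nt]+1}) / sqrt(n) on t_grid in [0, 1].

    X holds n i.i.d. increments with mean 0 and variance 1.
    """
    n = len(X)
    S = np.concatenate(([0.0], np.cumsum(X)))   # S[k] = X_1 + ... + X_k, S[0] = 0
    t = np.asarray(t_grid, dtype=float)
    k = np.floor(n * t).astype(int)             # integer part [nt]
    frac = n * t - k                            # fractional part {nt}
    k_next = np.minimum(k, n - 1)               # guard the X index at t = 1 (frac = 0 there)
    return (S[k] + frac * X[k_next]) / np.sqrt(n)

# One approximate Brownian path from n = 1000 Rademacher steps
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=1000)
t = np.linspace(0.0, 1.0, 201)
path = donsker_path(X, t)
```

Each draw of `X` produces one continuous, piecewise-linear path; Donsker's theorem says the laws of these paths on $C[0,\infty)$ converge weakly to the Wiener measure as $n \to \infty$.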

Coming back to your question: Donsker's theorem asserts the second, functional statement, i.e. convergence in distribution of $C[0,\infty)$-valued random variables, and this is indeed the stronger one. Since the projection $\pi_t: C[0,\infty) \to \mathbb{R}$, $f \mapsto f(t)$, is continuous, the continuous mapping theorem recovers the pointwise statement from the functional one.

In addition, if you fix a particular time $t_0$, then $S_{t_0}^{(n)}$ will converge in distribution to a random variable which is $\mathcal{N}(0, t_0)$ and "comes from" a Brownian motion. More precisely, for any $0 \leq t_1 < \cdots < t_d < \infty$, $(S_{t_1}^{(n)}, \dots, S_{t_d}^{(n)})$ converges in distribution to $(B_{t_1}, \dots, B_{t_d})$ as $n \to \infty$, where $B_t$ is a Brownian motion. This convergence, too, holds only in distribution.
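To make the fixed-time statement concrete, here is a small numerical check (my own sketch; the Rademacher increments, the time point $t_0 = 0.5$, and the sample sizes are illustrative choices): the empirical mean and variance of $S_{t_0}^{(n)}$ across many independent paths should be close to $0$ and $t_0$, matching the $\mathcal{N}(0, t_0)$ limit.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, t0 = 400, 20000, 0.5           # n steps per path, m independent paths

# m x n matrix of i.i.d. mean-0, variance-1 increments
X = rng.choice([-1.0, 1.0], size=(m, n))

# S_n(t0) for each path; here n * t0 = 200 is an integer, so the
# interpolation term {nt} * X_{[nt]+1} vanishes
k = int(np.floor(n * t0))
samples = X[:, :k].sum(axis=1) / np.sqrt(n)

print(samples.mean(), samples.var())  # should be close to 0 and t0 = 0.5
```

This only checks the one-dimensional marginal, of course; the functional statement says much more, e.g. it also controls functionals of the whole path such as the running maximum.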

Hope this helps.