Prove that Brownian motion has normally distributed increments using the central limit theorem


In the book Brownian Motion, 3rd edition, by Rene Schilling, a $d$-dimensional Brownian motion $B = (B_t)_{t\geq0}$ indexed by $[0,\infty)$ and taking values in $\mathbb R^d$ is defined as a process satisfying

$$
\begin{aligned}
&\text{(B0)} \quad B_0 = 0 \ \text{a.s.} \\
&\text{(B1)} \quad B_{t_n} - B_{t_{n-1}},\, B_{t_{n-1}} - B_{t_{n-2}},\,\ldots,\, B_{t_1}- B_{t_0} \ \text{ are independent for all } 0 =t_0 \leq t_1 \leq\ldots\leq t_n \\
&\text{(B2)} \quad B_t - B_s \sim B_{t+h} - B_{s+h} \ \text{ for all } 0 \leq s <t,\ h \geq -s \\
&\text{(B3)} \quad B_t -B_s \sim N(0,t-s)^{\otimes d}, \quad N(0,t)(dx) = \frac{1}{\sqrt{2\pi t}} \exp \left( -\frac{x^2}{2t} \right)dx \\
&\text{(B4)} \quad t \mapsto B_t(\omega) \ \text{ is continuous for all } \omega
\end{aligned}
$$
The author claims that (B0), (B1), (B2), and (B4) together imply (B3) as a consequence of the central limit theorem, but I don't see how. My first instinct is to divide $[0,t]$ into many sub-intervals, but no matter how we divide it, $B_t$ cannot be expressed as a limit $B_t = \lim_{n} \frac{1}{\sqrt n}(X_1+\dots+X_n)$ for a single i.i.d. sequence $(X_n)$, which is what the classical central limit theorem requires.
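As a numerical illustration of the difficulty (not part of the proof; Gaussian increments are assumed here purely to have a concrete path to work with), the decomposition of $B_t$ into $n$ increments is exact for every $n$, but the law of the summands changes with $n$, so one gets a triangular array rather than a single i.i.d. sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
t = 1.0
N = 4096  # fine simulation grid

# Simulate one path; Gaussian increments are ASSUMED here just to draw a path.
dB = rng.normal(0.0, np.sqrt(t / N), size=N)
B = np.concatenate(([0.0], np.cumsum(dB)))  # B[j] approximates B_{j t / N}

for n in (4, 64, 1024):  # n must divide N
    step = N // n
    # Row n of the triangular array: n i.i.d. increments, variance t/n each
    incr = B[step::step] - B[0:N:step]
    # Telescoping: the row sums exactly to B_t for EVERY n ...
    assert np.isclose(incr.sum(), B[-1])
    # ... but the summands shrink with n, so no fixed i.i.d. sequence (X_k)
    # with normalization 1/sqrt(n) reproduces B_t.
```

This is exactly the setting of the CLT for triangular arrays, which is what the answer below uses.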


As mentioned in the thread Application of central limit theorem for triangular arrays, the proof can be found in K. Itô, Lectures on Stochastic Processes, p. 136ff. Another proof, using stochastic calculus, is given in Continuous Processes with Independent Increments, drawing on ideas from A simple characterization of the Brownian Motion. We recreate the proof here.

1. By uniform continuity of the paths on the compact interval (continuity on a compact set implies uniform continuity by Heine–Cantor, so we can pick a uniform $\delta(\epsilon)$ for each $\epsilon$), we have for each $\epsilon>0$

$$P[\sup_{|t-s|\leq \delta(\epsilon),t,s\in [t_{0},t_{1}]}|X_{t}-X_{s}|< \epsilon]\geq 1-\epsilon.$$

2. Fix a sequence $\epsilon_{n}\to 0$ and corresponding partitions of the interval $[t_{0},t_{1}]$,

$$t_{0}=t^{n}_{0}<\ldots<t_{p_{n}}^{n}=t_{1}\quad\text{and}\quad t_{i+1}^{n}-t_{i}^{n}<\delta(\epsilon_{n}),$$

for some $p_{n}\to +\infty$. We set

$$X^{n}_{k}:=(X_{t_{k}^{n}}-X_{t_{k-1}^{n}})\,1_{\{|X_{t_{k}^{n}}-X_{t_{k-1}^{n}}|\leq \epsilon_{n}\}}$$

and

$$S_{n}=\sum_{k=1}^{p_{n}}X^{n}_{k}.$$
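To see steps 2–3 in action numerically, one can simulate the array and check that the truncation almost never bites when the mesh is fine relative to $\epsilon_{n}$ (Gaussian increments are assumed here only to have something to simulate; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
t0, t1 = 0.0, 1.0
paths = 10_000       # Monte Carlo sample of paths
p, eps = 400, 0.25   # p_n partition points, epsilon_n threshold

# Increments X_{t_k} - X_{t_{k-1}} over a uniform partition: variance
# (t1 - t0)/p, so their typical size sqrt(1/p) = 0.05 is well below eps.
d = rng.normal(0.0, np.sqrt((t1 - t0) / p), size=(paths, p))

kept = np.abs(d) <= eps            # truncation indicators 1_{|.| <= eps}
S = (d * kept).sum(axis=1)         # S_n, the sum of truncated increments
full = d.sum(axis=1)               # X_{t1} - X_{t0}

# Fraction of paths on which S_n coincides with the full increment
frac_equal = np.mean(np.all(kept, axis=1))
```

With these numbers `frac_equal` is essentially 1, matching the bound $P[X_{t_{1}}-X_{t_{0}}=S_{n}]\geq 1-\epsilon_{n}$ derived in the next step.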

3. On the event that every increment has modulus less than $\epsilon_{n}$, all the truncation indicators equal $1$ and $S_{n}$ telescopes to $X_{t_{1}}-X_{t_{0}}$. Hence

$$P[X_{t_{1}}-X_{t_{0}}=S_{n}]\geq P[\max_{k=1,\ldots,p_{n}}|X_{t_{k}^{n}}-X_{t_{k-1}^{n}}|< \epsilon_{n}]\geq P[\sup_{|t-s|\leq \delta(\epsilon_{n})}|X_{t}-X_{s}|< \epsilon_{n}]\geq 1-\epsilon_{n}.$$

So $S_{n}\to X:=X_{t_{1}}-X_{t_{0}}$ in probability. It remains to show that

$$E[e^{i\alpha S_{n}}]\to e^{im\alpha-\frac{V}{2}\alpha^{2}},$$

for some parameters $m$ and $V$. Define

$$m^{n}_{k}=E[X^{n}_{k}],V^{n}_{k}=V[X^{n}_{k}],m_{n}:=\sum_{k=1}^{p_{n}}m^{n}_{k},V_{n}:=\sum_{k=1}^{p_{n}}V^{n}_{k}.$$

4. By independence of the increments and a Taylor expansion we have

$$E[e^{i\alpha S_{n}}]=e^{i\alpha m_{n}}\prod_{k=1}^{p_{n}}E[e^{i\alpha( X_{k}^{n}-m_{k}^{n})}]=e^{i\alpha m_{n}}\prod_{k=1}^{p_{n}}\left(1-\frac{\alpha^{2}}{2}V^{n}_{k}(1+O(\epsilon_{n}))\right),$$

where we also used that, by construction, $|X_{k}^{n}|\leq \epsilon_{n}$.
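The Taylor step can be made explicit: the centered summand satisfies $|X_{k}^{n}-m_{k}^{n}|\leq 2\epsilon_{n}$, so

```latex
\begin{aligned}
E\left[e^{i\alpha(X_{k}^{n}-m_{k}^{n})}\right]
&= 1 - \frac{\alpha^{2}}{2}\,E\left[(X_{k}^{n}-m_{k}^{n})^{2}\right]
   + O\left(|\alpha|^{3}\,E\left[|X_{k}^{n}-m_{k}^{n}|^{3}\right]\right)\\
&= 1 - \frac{\alpha^{2}}{2}\,V_{k}^{n}\bigl(1+O(\epsilon_{n})\bigr),
\end{aligned}
```

using that the first-order term vanishes ($E[X_{k}^{n}-m_{k}^{n}]=0$) and that $E[|X_{k}^{n}-m_{k}^{n}|^{3}]\leq 2\epsilon_{n}\,E[(X_{k}^{n}-m_{k}^{n})^{2}]=2\epsilon_{n}V_{k}^{n}$.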

5. Showing that $V_{n}$ converges to some $V$ along a subsequence.

Since $S_{n}\to X_{t_{1}}-X_{t_{0}}$ in probability, it also converges in distribution, so the characteristic functions converge pointwise (see Convergence in probability implies convergence of characteristic functions) and

$$|E[e^{i\alpha ( X_{t_{1}}-X_{t_{0}})}]|=\lim_{n\to+\infty}|E[e^{i\alpha S_{n}}]|\leq \liminf_{n\to+\infty}\prod_{k=1}^{p_{n}}e^{-\frac{\alpha^{2}}{4}V^{n}_{k}}=\liminf_{n\to+\infty}e^{-\frac{\alpha^{2}}{4}V_{n}},$$ where we also used the inequality

$$1-\theta\leq e^{-\theta/2},$$

for small enough $\theta>0$, which is indeed the case here since $V_{k}^{n}\leq \epsilon_{n}^{2}$. As shown in Showing $\varphi(t)\neq 0$ when $\varphi$ is a characteristic function of an infinitely divisible distribution, the left-hand side is nonzero: $|E[e^{i\alpha ( X_{t_{1}}-X_{t_{0}})}]|\neq 0$. Therefore $V_{n}$ is a bounded sequence, and by Bolzano–Weierstrass it has a subsequence converging to some $V\geq 0$ (if $V=0$ the limit is a degenerate Gaussian, i.e. a point mass).

Coming back to the computation, along this subsequence we get

$$\prod_{k=1}^{p_{n}}\left(1-\frac{\alpha^{2}}{2}V^{n}_{k}(1+O(\epsilon_{n}))\right)\to e^{-\frac{\alpha^{2}}{2}V}.$$
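This convergence of the product can be checked by taking logarithms; since $0\leq \frac{\alpha^{2}}{2}V_{k}^{n}\leq \frac{\alpha^{2}}{2}\epsilon_{n}^{2}\to 0$, the expansion $\log(1+z)=z+O(z^{2})$ gives

```latex
\sum_{k=1}^{p_{n}}\log\left(1-\frac{\alpha^{2}}{2}V_{k}^{n}(1+O(\epsilon_{n}))\right)
= -\frac{\alpha^{2}}{2}\sum_{k=1}^{p_{n}}V_{k}^{n} + O\left(\epsilon_{n}\,V_{n}\right)
= -\frac{\alpha^{2}}{2}V_{n} + o(1)\;\longrightarrow\;-\frac{\alpha^{2}}{2}V,
```

along the subsequence with $V_{n}\to V$, using that $V_{n}$ stays bounded.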

6. Showing that $m_{n}$ converges to some $m$. Suppose that $|m_{n}|$ is unbounded, and pass to a subsequence along which $|m_{n}|\to\infty$. Then for every small $\beta>0$ we have, writing $\phi(\alpha):=e^{\frac{\alpha^{2}}{2}V}E[e^{i\alpha X}]$,

$$\left|\int_{0}^{\beta}\phi(\alpha)d\alpha\right|=\lim_{n\to +\infty}\left|\int_{0}^{\beta} e^{i\alpha m_{n}}d\alpha\right|\leq \lim_{n\to +\infty}\left|\frac{e^{i\beta m_{n}}-1}{im_{n}}\right|\leq \lim_{n\to +\infty}\frac{2}{|m_{n}|}=0.$$

This again contradicts that the characteristic function of an infinitely divisible distribution is non-zero.

Therefore the sequence $|m_{n}|$ is bounded; by Bolzano–Weierstrass it has a subsequence converging to some $r\geq 0$, and by possibly passing to a further subsequence we get $m_{n_{k}}\to m$ for some $m$ with $|m|=r$.

7. Conclusion. We showed that, along a suitable subsequence,

$$E[e^{i\alpha S_{n}}]\to e^{im\alpha} e^{-\frac{\alpha^{2}}{2}V}.$$

By Lévy's continuity theorem this means $S_{n}\to N(m,V)$ in distribution along that subsequence, and since $S_{n}\to X_{t_{1}}-X_{t_{0}}$ in probability, the increment $X_{t_{1}}-X_{t_{0}}$ is Gaussian with mean $m$ and variance $V$.
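A quick Monte Carlo sanity check of the limiting characteristic function (simulating Gaussian increments, so that $m=0$ and $V=t_{1}-t_{0}$; this verifies the limit numerically, not the proof itself):

```python
import numpy as np

rng = np.random.default_rng(2)
paths, p = 20_000, 400
t, eps = 1.0, 0.25

# Triangular-array row: p truncated increments per path (Gaussian assumed)
d = rng.normal(0.0, np.sqrt(t / p), size=(paths, p))
S = (d * (np.abs(d) <= eps)).sum(axis=1)   # S_n for each simulated path

alphas = np.array([0.5, 1.0, 2.0])
# Empirical |E[e^{i alpha S_n}]| versus the claimed limit |e^{i m a - V a^2 / 2}|
emp = np.abs(np.exp(1j * np.outer(alphas, S)).mean(axis=1))
theory = np.exp(-alphas**2 * t / 2)        # m = 0, V = t in this simulation
```

The two arrays agree to within Monte Carlo error of order $1/\sqrt{\text{paths}}$.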