How to prove Brownian motion is a Gaussian process?


I'm reading Bernt Oksendal's "Stochastic Differential Equations", and this is one of the proofs that I'm totally lost on.

This is from Ch. 2.2, pages 12–13 (sixth edition).

First, Brownian motion is defined as

$$P^x(B_{t_1}\in F_1, \cdots, B_{t_k}\in F_k) := \\ \int\limits_{F_1 \times \cdots \times F_k}p(t_1, x, x_1)\cdots p(t_k-t_{k-1}, x_{k-1}, x_k)dx_1 \ldots dx_k, \tag{2.2.2}$$ where $$p(t,x,y) := (2\pi t)^{-n/2}\cdot \exp(-\frac{|x-y|^2}{2t})$$
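As a quick sanity check (my own addition, not in the book): for $n=1$ the kernel $p(t,x,\cdot)$ should integrate to $1$ over $\mathbb{R}$, which is easy to verify numerically:

```python
import numpy as np

def p(t, x, y, n=1):
    """Gaussian transition kernel p(t, x, y) from (2.2.2), here for scalar x, y."""
    return (2 * np.pi * t) ** (-n / 2) * np.exp(-np.abs(x - y) ** 2 / (2 * t))

# Riemann sum of p(t, x, .) over a wide grid; the total mass should be ~ 1.
y = np.linspace(-50.0, 50.0, 200_001)
mass = np.sum(p(2.0, 0.5, y)) * (y[1] - y[0])
print(mass)  # ~ 1.0
```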

Then it says that Brownian motion $B_t$ is a Gaussian process, i.e. for all $0 \leq t_1 \leq \cdots \leq t_k$ the random variable $Z = (B_{t_1}, \ldots, B_{t_k} ) \in \mathbb{R}^{nk}$ has a (multi)normal distribution. This means that there exist a vector $M \in \mathbb{R}^{nk}$ and a non-negative definite matrix $C = [c_{jm}] \in \mathbb{R}^{nk\times nk}$ such that

$$E^x\left[\exp\left(i\sum_{j=1}^{nk}u_jZ_j\right)\right] = \exp\left(-\frac{1}{2}\sum_{j,m}u_jc_{jm}u_m+i\sum_j u_j M_j\right) \tag{2.2.3},$$ for all $u = (u_1, \ldots , u_{nk}) \in \mathbb{R}^{nk}$, where $i =\sqrt{-1}$ is the imaginary unit, and $E^x$ denotes expectation with respect to $P^x$.

Moreover, if (2.2.3) holds then $M = E^x[Z]$ is the mean value of $Z$ (2.2.4), and $c_{jm} = E^x[(Z_j - M_j)(Z_m -M_m)]$ is the covariance matrix of $Z$ (2.2.5).
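(For reference, (2.2.4)–(2.2.5) follow from (2.2.3) by the standard trick of differentiating the characteristic function at $u=0$:
$$\frac{\partial}{\partial u_j}\bigg|_{u=0} E^x\left[e^{i u\cdot Z}\right] = i\,E^x[Z_j], \qquad \frac{\partial^2}{\partial u_j\,\partial u_m}\bigg|_{u=0} E^x\left[e^{i u\cdot Z}\right] = -E^x[Z_j Z_m],$$
while the same derivatives of the right-hand side of (2.2.3) give $iM_j$ and $-(c_{jm}+M_jM_m)$; matching them identifies $M_j = E^x[Z_j]$ and $c_{jm} = E^x[Z_jZ_m]-M_jM_m$.)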

Then it goes to the proof:

To see that (2.2.3) holds for $Z = (B_{t_1}, \ldots, B_{t_k} ) $ we calculate its left hand side explicitly by using (2.2.2) (see Appendix A for background on the multinormal distribution) and obtain (2.2.3) with

$$M=E^x[Z]=(x, x, \cdots, x)\in \mathbb{R}^{nk} \tag{2.2.6}$$

and $$C=\begin{pmatrix} t_1 I_n & t_1 I_n & \cdots & t_1 I_n \\ t_1 I_n & t_2 I_n & \cdots & t_2 I_n\\ \vdots & \vdots & & \vdots \\ t_1 I_n & t_2 I_n & \cdots & t_k I_n \end{pmatrix} \tag{2.2.7} $$

Hence $$E^x[B_t] = x \quad \text{for all } t\geq 0 \tag{2.2.8}$$ and $$E^x[|B_t-x|^2]=nt, \qquad E^x[(B_t-x)\cdot(B_s-x)]=n \min(s,t) \tag{2.2.9}$$
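For intuition, (2.2.9) can be checked with a quick Monte Carlo simulation (my own sketch, with $n=1$, $x=0$, and arbitrary grid and sample-size choices): simulating Brownian paths from independent Gaussian increments, the empirical moments should match $E^0[B_t^2]=t$ and $E^0[B_sB_t]=\min(s,t)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-d Brownian motion started at x = 0, built from independent increments
# B_{t_j} - B_{t_{j-1}} ~ N(0, t_j - t_{j-1}) on a uniform grid.
n_paths, n_steps, dt = 50_000, 100, 0.01
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)      # B[:, j] is B at time (j + 1) * dt

s_idx, t_idx = 29, 69                  # times s = 0.3 and t = 0.7
cov_st = np.mean(B[:, s_idx] * B[:, t_idx])   # should be close to min(s, t) = 0.3
var_t = np.mean(B[:, t_idx] ** 2)             # should be close to t = 0.7
print(cov_st, var_t)
```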

I'm totally lost on how it jumps straight to the conclusions (2.2.6)–(2.2.9).

(2.2.2) seems quite far away from (2.2.3); how can it claim that (2.2.3) holds because of (2.2.2), and how does that give the stated $M$ and $C$?

Also, I don't understand why (2.2.7) looks so strange: the subscripts are not in the usual $(i,j)$ format, and I could not figure out how this conclusion was reached.


On BEST ANSWER

A possible way to avoid painful computations, using standard results, is the following. For simplicity, let us consider the 1-dimensional case with $x=0$ (the idea in the general case is the same, but the computations are longer).

  1. Define $T(x_1,\ldots,x_k):=\left(\sum_{i=1}^j x_i\right)_{j=1}^k=(x_1,x_1+x_2,\ldots, x_1+x_2+\ldots+x_k)$.
  2. Rewrite the joint density in (2.2.2) (with $t_0:=0$ and $z_0:=x=0$) as: $$ \dfrac{1}{\prod_{j=1}^k \left(2\pi(t_j-t_{j-1})\right)^{1/2}}\exp \left(-\sum_{j=1}^k\dfrac{(z_j-z_{j-1})^2}{2(t_j-t_{j-1})} \right) $$
  3. Use the change of variables theorem (it is a straightforward formula), noting that $\det{T}=1$, to show that $X:=T^{-1}(Z)$ has a density of the form: $$ \prod_{j=1}^k\rho^\mathcal{N}_{t_j-t_{j-1}}(x_j), $$ where $\rho^\mathcal{N}_{\sigma^2}$ is the p.d.f. of a normal variable with mean zero and variance $\sigma^2$.
  4. Since the density of $X$ is that of a Gaussian vector with independent components, it is standard that its characteristic function is: $$ E^0\left[\exp\left(i\sum_{j=1}^{k}u_jX_j\right)\right] = \exp\left(-\frac{1}{2}\sum_{j,m}u_j\tilde{c}_{jm}u_m\right) \tag{2.2.3'} $$ where $$ \tilde{C}=(\tilde{c}_{ij})_{i,j}=(\delta_{ij}(t_j-t_{j-1}))_{i,j} $$ with $t_0:=0$.
  5. To finally obtain the characteristic function of $Z$, note that $Z=T(X)$, so it is of the form of (2.2.3) with $M=0$ and $$ C=M_T \tilde{C} M_T^t $$ where $M_T$ is the matrix associated with $T$, i.e. $(M_T)_{ij}=\sum_{l=1}^{i}\delta_{lj}$, $$ M_T= \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 1 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix} . $$ Thus, $$ c_{ij}=\sum_{l,m}\left(\sum_{r=1}^{i}\delta_{rl}\right)\delta_{lm}(t_l-t_{l-1})\left(\sum_{s=1}^{j}\delta_{sm}\right)= \sum_{r=1}^{i}\sum_{s=1}^{j}\delta_{rs}(t_r-t_{r-1})=\sum_{r=1}^{\min(i,j)}(t_r-t_{r-1})=t_{\min(i,j)}, $$ which is exactly the $(i,j)$ entry of (2.2.7) for $n=1$. So we get (2.2.3) and (2.2.7).
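Step 5 can also be checked numerically in a few lines (a sketch of mine with an arbitrary illustrative time grid, not part of the argument above): building $M_T$ and $\tilde C$ with NumPy, the product $M_T\tilde C M_T^t$ should reproduce the $\min(t_i,t_j)$ entries of (2.2.7) for $n=1$:

```python
import numpy as np

# Arbitrary increasing time grid 0 < t_1 < ... < t_k (illustrative values).
t = np.array([0.5, 1.2, 2.0, 3.7])
k = len(t)

M_T = np.tril(np.ones((k, k)))                          # matrix of the partial-sum map T
C_tilde = np.diag(np.diff(np.concatenate(([0.0], t))))  # diag(t_j - t_{j-1}), t_0 = 0

C = M_T @ C_tilde @ M_T.T    # covariance of Z = (B_{t_1}, ..., B_{t_k})
print(np.allclose(C, np.minimum.outer(t, t)))  # True: c_ij = min(t_i, t_j)
```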