I'm reading Bernt Oksendal's "Stochastic Differential Equations" and this is one of the proofs I'm totally lost on.
It is from Ch. 2.2, pages 12–13 (sixth edition).
First, Brownian motion is defined as
$$P^x(B_{t_1}\in F_1, \cdots, B_{t_k}\in F_k) := \\ \int\limits_{F_1 \times \cdots \times F_k}p(t_1, x, x_1)\cdots p(t_k-t_{k-1}, x_{k-1}, x_k)dx_1 \ldots dx_k, \tag{2.2.2}$$ where $$p(t,x,y) := (2\pi t)^{-n/2}\cdot \exp(-\frac{|x-y|^2}{2t})$$
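(It may help to note, though the book does not say it at this point, that after the change of variables $y_j = x_j - x_{j-1}$, with $x_0 = x$ and $t_0 = 0$, the integrand in (2.2.2) factorizes, which is how one reads off that the increments are independent Gaussians:)

```latex
% Since p(t, x, y) depends only on x - y, substituting y_j = x_j - x_{j-1}:
p(t_1, x, x_1)\cdots p(t_k - t_{k-1}, x_{k-1}, x_k)
  = \prod_{j=1}^{k} p\big(t_j - t_{j-1},\, 0,\, y_j\big),
% i.e. the increments B_{t_j} - B_{t_{j-1}} are independent with
% distribution N(0, (t_j - t_{j-1}) I_n).
```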
Then it says Brownian motion $B_t$ is a Gaussian process, i.e. for all $0 \leq t_1 \leq \cdots \leq t_k$ the random variable $Z = (B_{t_1}, \ldots, B_{t_k} ) \in \mathbb{R}^{nk}$ has a (multi)normal distribution. This means that there exists a vector $M \in \mathbb{R}^{nk}$ and a non-negative definite matrix $C = [c_{jm}] \in \mathbb{R}^{nk\times nk}$ such that
$$E^x\left[\exp\left(i\sum_{j=1}^{nk}u_jZ_j\right)\right] = \exp\left(-\frac{1}{2}\sum_{j,m}u_jc_{jm}u_m+i\sum_j u_j M_j\right) \tag{2.2.3},$$ for all $u = (u_1, \ldots , u_{nk}) \in \mathbb{R}^{nk}$, where $i =\sqrt{-1}$ is the imaginary unit, and $E^x$ denotes expectation with respect to $P^x$.
Moreover, if (2.2.3) holds then $M = E^x[Z]$ is the mean value of $Z$ (2.2.4), and $c_{jm} = E^x[(Z_j - M_j)(Z_m -M_m)]$ is the covariance matrix of $Z$ (2.2.5).
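(A standard way to see (2.2.4)–(2.2.5) from (2.2.3), assuming one may differentiate under the expectation, is to take partial derivatives of both sides of (2.2.3) at $u = 0$:)

```latex
\begin{align*}
% first moment: differentiate (2.2.3) once in u_j and set u = 0
\partial_{u_j}\,\mathrm{LHS}\big|_{u=0} &= i\,E^x[Z_j],
& \partial_{u_j}\,\mathrm{RHS}\big|_{u=0} &= i\,M_j
  &&\Longrightarrow\; M_j = E^x[Z_j],\\
% second moment: differentiate twice and set u = 0
\partial_{u_j}\partial_{u_m}\,\mathrm{LHS}\big|_{u=0} &= -\,E^x[Z_j Z_m],
& \partial_{u_j}\partial_{u_m}\,\mathrm{RHS}\big|_{u=0} &= -\,c_{jm} - M_j M_m
  &&\Longrightarrow\; c_{jm} = E^x[Z_j Z_m] - M_j M_m.
\end{align*}
```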
Then it goes to the proof:
To see that (2.2.3) holds for $Z = (B_{t_1}, \ldots, B_{t_k} ) $ we calculate its left hand side explicitly by using (2.2.2) (see Appendix A -- something about multi-normal distribution) and obtain (2.2.3) with
$$M=E^x[Z]=(x, x, \cdots, x)\in \mathbb{R}^{nk} \tag{2.2.6}$$
and $$C=\begin{pmatrix} t_1 I_n & t_1 I_n & \cdots & t_1 I_n \\ t_1 I_n & t_2 I_n & \cdots & t_2 I_n\\ \vdots & \vdots & & \vdots \\ t_1 I_n & t_2 I_n & \cdots & t_k I_n \end{pmatrix} \tag{2.2.7} $$
Hence $$E^x[B_t] = x \quad \text{for all } t\geq 0 \tag{2.2.8}$$ and $$E^x[(B_t-x)^2]=nt, \qquad E^x[(B_t-x)(B_s-x)]=n \min(s,t). \tag{2.2.9}$$
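(To pass from (2.2.7) to (2.2.9): by (2.2.5), the $n \times n$ block of $C$ pairing times $s \le t$ is $s\,I_n = \min(s,t)\,I_n$, and summing its diagonal entries over the $n$ coordinates produces the factor $n$:)

```latex
% The block of C in (2.2.7) pairing times s and t says, entrywise,
%   E^x[(B_s^{(i)} - x^{(i)})(B_t^{(l)} - x^{(l)})] = \min(s,t)\,\delta_{il}.
% Summing over i = l = 1, ..., n:
E^x\big[(B_s - x)(B_t - x)\big]
  = \operatorname{tr}\big(\min(s,t)\,I_n\big)
  = n\,\min(s,t),
% and taking s = t gives E^x[(B_t - x)^2] = n t, which is (2.2.9).
```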
I'm totally lost on how it jumps to the conclusions (2.2.6)–(2.2.9).
(2.2.2) seems quite far from (2.2.3): how can one claim that (2.2.3) holds because of (2.2.2), and how does that yield the stated $M$ and $C$?
Also, (2.2.7) looks strange to me: its entries are not indexed in the usual $(i,j)$ format, and I can't figure out how this matrix was obtained.
A possible solution that avoids painful computations, using standard results, is the following. For the sake of simplicity, let us consider the 1-dimensional case with $x=0$ (the idea in the general case is the same, but the computations are longer).
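For instance, the key covariance identity in this reduced setting can be obtained from the independent increments of $B_t$ (which follow from (2.2.2)):

```latex
% For 0 <= s <= t, write B_t = B_s + (B_t - B_s); by (2.2.2) the
% increment B_t - B_s is N(0, t - s) and independent of B_s, so
\begin{aligned}
E^0[B_s B_t] &= E^0\big[B_s\,(B_t - B_s)\big] + E^0\big[B_s^2\big]\\
             &= E^0[B_s]\,E^0[B_t - B_s] + s = 0 + s = \min(s,t).
\end{aligned}
```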