Distribution of Difference of Independent Random Variables


Usually in the development of the theory of Brownian motion, one makes the assumption that the $X_t$ (the coordinate functions on $(\mathbb{R}^*)^{[0,\infty)}$) have normal distributions with mean $0$ and variance $t$, and that the distribution of $X_t - X_s$ (for $0 \le s < t$) is normal as well, with mean $0$ and variance $t-s.$ See, e.g., $\S$10.5 of Folland.

The standard mathematical model for this is the Wiener process, and by the Riesz-Markov-Kakutani representation theorem, it is possible to construct a probability space $(\Omega, \mathcal{B}, P)$ with the joint distribution of $X_{t_1}, \ldots, X_{t_n}$ being exactly the prescribed normal distribution, namely $$\mathrm{d}P_{t_1,\ldots,t_n} = \left[\prod_{j=1}^{n} 2\pi (t_j-t_{j-1})\right]^{-\frac{1}{2}} \exp\left(-\sum_{j=1}^{n}\frac{(x_j-x_{j-1})^2}{2(t_j-t_{j-1})}\right) \mathrm{d}x_1\ldots\mathrm{d}x_n,$$ where $0 = t_0 < t_1 < \ldots < t_n,$ $x_0 = 0,$ and $\mathrm{d}x_1\ldots\mathrm{d}x_n$ is $n$-dimensional Lebesgue measure. Note that this density factors as a product of one-dimensional Gaussian densities in the increments $x_j - x_{j-1}$, which reflects the independence of the increments $X_{t_j} - X_{t_{j-1}}$.
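As a sanity check on this formula, one can sample the vector $(X_{t_1},\ldots,X_{t_n})$ by summing independent Gaussian increments $X_{t_j}-X_{t_{j-1}} \sim N(0, t_j - t_{j-1})$ and verify that $\mathrm{Var}(X_{t_j}) = t_j$. A minimal Monte Carlo sketch; the specific times and sample size below are illustrative choices, not from the text:

```python
import random
import statistics

random.seed(0)

# Sample (X_{t_1}, ..., X_{t_n}) by summing independent Gaussian
# increments X_{t_j} - X_{t_{j-1}} ~ N(0, t_j - t_{j-1}).
def sample_path(times, n_samples):
    paths = []
    for _ in range(n_samples):
        x, t_prev = 0.0, 0.0
        point = []
        for t in times:
            x += random.gauss(0.0, (t - t_prev) ** 0.5)
            point.append(x)
            t_prev = t
        paths.append(point)
    return paths

times = [0.5, 1.0, 2.0]   # illustrative choice of 0 < t_1 < t_2 < t_3
paths = sample_path(times, 200_000)

# Var(X_{t_j}) should be approximately t_j for each j.
for j, t in enumerate(times):
    var = statistics.pvariance(p[j] for p in paths)
    print(f"t = {t}: sample variance = {var:.3f}")
```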

Given this, my professor showed that $X_t - X_s$ has distribution $(2\pi(t-s))^{-1/2}\exp\left(-\tfrac{x^2}{2(t-s)}\right)\mathrm{d}x$ by considering the linear transformation $T:\mathbb{R}^2 \to \mathbb{R}^2$ given by $T(y_1,y_2) = (y_1, y_1+y_2),$ which in particular has determinant $1.$ The main point of the argument was that pushing the joint distribution forward under $T$ (or its inverse) leaves Lebesgue measure unchanged, so that the joint distributions of $(X_s, X_t)$ and $(X_s, X_t - X_s)$ can be derived from each other.

My question is whether someone could supply the details of this argument. In my notes I have something with differential forms and pushforwards, but it's really not clear to me. Intuitively it makes sense, but I'd like to see an explicit answer.

Best answer

The case $n=2$ of the formula in your post shows that, for every $s\lt t$, the density $f$ of the distribution of $(X_s,X_t)$ is such that $$ f(x,y)=\frac1{2\pi\sqrt{(t-s)s}}\exp\left(-\frac{x^2}{2s}-\frac{(y-x)^2}{2(t-s)}\right). $$ Recall that $f$ allows one to compute the expectation of every measurable (say, bounded) function $u$ of $(X_s,X_t)$, through the formula $$ E[u(X_s,X_t)]=\iint u(x,y)f(x,y)\,\mathrm dx\,\mathrm dy. $$ In particular, if $u:(x,y)\mapsto v(y-x)$, one gets $E[v(X_t-X_s)]=(*)$ with $$ (*)=\iint v(y-x)f(x,y)\,\mathrm dx\,\mathrm dy. $$ Hence the task is, for every measurable (say, bounded) function $v$, to transform $(*)$ into an integral $$ (**)=\int v(z)g(z)\,\mathrm dz, $$ in which case we shall know that the density of the distribution of $X_t-X_s$ is $g$.
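The identity $E[v(X_t-X_s)]=(*)$ can be checked numerically for a concrete $v$: with $v(z)=z^2$, the double integral should come out to $\mathrm{Var}(X_t-X_s)=t-s$. A crude Riemann-sum sketch; the values of $s$, $t$, the grid step and the truncation of the integration domain are illustrative choices:

```python
import math

s, t = 1.0, 2.0   # illustrative times with s < t

def f(x, y):
    """Joint density of (X_s, X_t) from the formula above."""
    return (1.0 / (2 * math.pi * math.sqrt((t - s) * s))
            * math.exp(-x**2 / (2 * s) - (y - x)**2 / (2 * (t - s))))

def v(z):
    return z * z   # then E[v(X_t - X_s)] should equal t - s

# Crude Riemann sum for (*) = double integral of v(y - x) f(x, y).
h, lim = 0.05, 8.0
grid = [-lim + h * k for k in range(int(2 * lim / h))]
star = sum(v(y - x) * f(x, y) for x in grid for y in grid) * h * h
print(f"(*) ~ {star:.3f}  (expected t - s = {t - s})")
```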

A change of variables $(x,y)\mapsto(z,w)$ does exactly this. The choice $z=y-x$ is forced; the choice $w=x$ is one possibility amongst many. Then, as you noted, $\mathrm dx\,\mathrm dy=\mathrm dz\,\mathrm dw$ (the Jacobian determinant of the transformation is $1$), hence $$ (*)=\iint v(z)f(w,z+w)\,\mathrm dz\,\mathrm dw=\int v(z)\left(\int f(w,z+w)\,\mathrm dw\right)\mathrm dz, $$ from which the identification of $g$ as the inner parenthesis is direct, that is, $$ g(z)=\int f(w,z+w)\,\mathrm dw. $$ This is quite general. Now, for the density $f$ of interest, one gets $$ g(z)=\frac1{\sqrt{2\pi(t-s)}}\exp\left(-\frac{z^2}{2(t-s)}\right)\int\frac1{\sqrt{2\pi s}}\exp\left(-\frac{w^2}{2s}\right)\mathrm dw. $$ The last integral on the RHS being equal to $1$, $g$ is the centered normal density with variance $t-s$, as was expected.
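One can also evaluate $g(z)=\int f(w,z+w)\,\mathrm dw$ directly by a one-dimensional Riemann sum and compare it with the centered normal density with variance $t-s$. A sketch; the values of $s$, $t$, the step size and the truncation are illustrative choices:

```python
import math

s, t = 1.0, 2.0   # illustrative times with s < t

def f(x, y):
    """Joint density of (X_s, X_t)."""
    return (1.0 / (2 * math.pi * math.sqrt((t - s) * s))
            * math.exp(-x**2 / (2 * s) - (y - x)**2 / (2 * (t - s))))

def g(z, h=0.02, lim=10.0):
    """Marginal density g(z) = integral of f(w, z + w) dw, by Riemann sum."""
    ws = [-lim + h * k for k in range(int(2 * lim / h))]
    return sum(f(w, z + w) for w in ws) * h

def normal_pdf(z, var):
    """Centered normal density with the given variance."""
    return math.exp(-z**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# g should agree with the N(0, t - s) density at every point.
for z in (0.0, 0.5, 1.5):
    print(f"g({z}) = {g(z):.5f}  vs N(0, t-s) density: {normal_pdf(z, t - s):.5f}")
```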

At the end of the day, the fact that the density of some random vector $(\xi,\eta)$ is a function $f$ such that $$ f(x,y)=h(x)g(y-x), $$ for some densities $h$ and $g$ implies that $\xi$ has density $h$, that $\eta-\xi$ has density $g$, and that $\xi$ and $\eta-\xi$ are independent.
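This last fact can be illustrated by simulation: build $(\xi,\eta)$ with joint density $h(x)g(y-x)$ by setting $\eta=\xi+Z$ with $Z\sim g$ independent of $\xi\sim h$, and check that a product moment of $\xi$ and $\eta-\xi$ factors. A sketch where $h$ and $g$ are taken, as illustrative choices, to be the $N(0,1)$ and $N(0,2)$ densities:

```python
import random
import statistics

random.seed(2)
n = 200_000

# Illustrative choices: h = N(0,1) density, g = N(0,2) density.
# Construct (xi, eta) so that their joint density is h(x) g(y - x).
xi = [random.gauss(0.0, 1.0) for _ in range(n)]
eta = [x + random.gauss(0.0, 2.0 ** 0.5) for x in xi]
inc = [b - a for a, b in zip(xi, eta)]   # eta - xi, which should be ~ g

# Independence check: E[xi^2 (eta - xi)^2] should factor as
# E[xi^2] * E[(eta - xi)^2]  (here 1 * 2 = 2).
lhs = statistics.fmean(a * a * c * c for a, c in zip(xi, inc))
rhs = (statistics.fmean(a * a for a in xi)
       * statistics.fmean(c * c for c in inc))
print(f"E[xi^2 (eta-xi)^2] ~ {lhs:.3f}, E[xi^2] E[(eta-xi)^2] ~ {rhs:.3f}")
```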