Given a Brownian motion $W$, and $k \in (a,b)$, I'm trying to find the distribution of $W(k)$ in terms of $W(b)$, $W(a)$, and $k$


I'm trying to perform this "interpolation" because I'm ultimately trying to write a small library to simulate stochastic processes. I realized I might need to figure out the distribution of $W(k)$ when I already know the values of $W(a)$ and $W(b)$ during a sample run.

Here's what I got so far: $$ W(k) = \frac{1}{2} (W(k)-W(a)) + \frac{1}{2}W(a) + \frac{1}{2}W(b) - \frac{1}{2}(W(b)-W(k)) $$ I know the first and last summands are independent normal r.v.s, so $$ W(k) = \frac{1}{2} (W(b)+W(a))+ \frac{1}{2} \left( N(0,k-a)-N(0,b-k) \right) $$ which results in $$ W(k) = \frac{W(b)+W(a)}{2} + \frac{1}{2} N(0,b-a) $$

The distribution of $W(k)$ doesn't seem to depend on $k$ at all, which is very counter-intuitive. For example, if during a sample I see $W(0)=0$ and $W(1)=1$, I refuse to believe that $W(0.1)$ has the same distribution as $W(0.99)$, namely $N(0.5, 0.25)$. I must be doing something wrong, but I can't see what.

The formal manipulations seem correct. Am I making some independence assumption that isn't true?

There are 2 best solutions below


To see what's happening, set $X = \frac{W(b) + W(a)}{2}$ and $Y = W(k) -X$. I agree that your computation shows $Y \sim \frac{1}{2} N(0,b-a)$, regardless of the value of $k$. And the distribution of $X$ certainly doesn't depend on $k$.

But this does not imply that the distribution of $W(k) = X+Y$ is the same for all $k$, because the covariance of $X$ and $Y$ does depend on $k$. (I will leave you to compute it.)
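A quick Monte Carlo sketch makes this concrete (illustrative choices $a=0$, $b=1$, and two values of $k$, all my own; not from the answer): the empirical covariance of $X$ and $Y$ clearly moves with $k$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 0.0, 1.0, 500_000

def cov_xy(k):
    """Empirical Cov(X, Y) with X = (W(a)+W(b))/2, Y = W(k) - X."""
    # Build (W(a), W(k), W(b)) from independent Gaussian increments.
    w_a = np.sqrt(a) * rng.standard_normal(n)
    w_k = w_a + np.sqrt(k - a) * rng.standard_normal(n)
    w_b = w_k + np.sqrt(b - k) * rng.standard_normal(n)
    x = (w_a + w_b) / 2
    y = w_k - x
    return np.cov(x, y)[0, 1]

print(cov_xy(0.1), cov_xy(0.9))  # ≈ -0.2 vs ≈ +0.2: the covariance depends on k
```

So even though each marginal is $k$-free, the joint law of $(X, Y)$, and hence the law of the sum, is not.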

Put another way, to conclude that the distribution of $W(k)$ was the same for all $k$, you'd need to show that the joint distribution of $(X,Y)$ was the same for all $k$. But you've only shown that the marginal distributions are the same for all $k$.


Turns out I was making my life more difficult by trying to be more clever than I actually am.

Using the conditional distribution of a multivariate normal random variable (https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions) makes the calculation simple, if a bit long.

Using Wikipedia's notation (mostly): $$ x = \begin{bmatrix} W(k) \\ W(a) \\ W(b) \end{bmatrix} $$ with $ x_1 = \begin{bmatrix} W(k) \end{bmatrix} $ and $ x_2 = \begin{bmatrix} W(a) \\ W(b) \end{bmatrix}$. Since $\operatorname{Cov}(W(s),W(t)) = \min(s,t)$, $$ \Sigma = \begin{bmatrix} k & a & k \\ a & a & a \\ k & a & b \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} k \end{bmatrix} & \begin{bmatrix} a & k \end{bmatrix} \\ \begin{bmatrix} a \\ k\end{bmatrix} & \begin{bmatrix} a & a \\ a & b \end{bmatrix} \end{bmatrix} = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} $$ So $ x \sim N(0,\Sigma) $, since the mean of any $W(\cdot)$ is 0. The conditional distribution of $x_1$ given $x_2 = u = \begin{bmatrix} u_a \\ u_b \end{bmatrix}$ will then be normal (in general multivariate normal, but here univariate since $x_1$ has dimension 1).
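To guard against algebra slips, the block formulas can be evaluated numerically first (a NumPy sketch; the times $a=1$, $k=2$, $b=5$ are illustrative values of my own choosing):

```python
import numpy as np

a, k, b = 1.0, 2.0, 5.0  # illustrative times with a < k < b

# Blocks of the covariance matrix of (W(k), W(a), W(b)),
# using Cov(W(s), W(t)) = min(s, t)
Sigma11 = np.array([[k]])
Sigma12 = np.array([[a, k]])
Sigma22 = np.array([[a, a], [a, b]])

gain = Sigma12 @ np.linalg.inv(Sigma22)  # weights on (u_a, u_b) in the cond. mean
cond_var = Sigma11 - gain @ Sigma12.T    # conditional variance

print(gain)      # ≈ [[0.75, 0.25]]
print(cond_var)  # ≈ [[0.75]]
```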

The mean of this new random variable is $$ \mathbb{E}(x_1) + \Sigma_{12} \Sigma_{22}^{-1} (u-\mathbb{E}(x_2)) = \Sigma_{12} \Sigma_{22}^{-1} u = \frac{b-k}{b-a}u_a + \frac{k-a}{b-a}u_b $$ and the variance is $$ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} = \frac{(b-k)(k-a)}{b-a} $$ Thus $$ W(k) \sim N\left(\frac{b-k}{b-a}u_a + \frac{k-a}{b-a}u_b ,\ \frac{(b-k)(k-a)}{b-a}\right) $$
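For the simulation-library use case, this conditional law translates directly into a sampler (a minimal sketch; the name `sample_bridge_point` is my own, not an existing API):

```python
import numpy as np

def sample_bridge_point(k, a, b, u_a, u_b, rng=None, size=None):
    """Sample W(k) given W(a) = u_a and W(b) = u_b, for a <= k <= b."""
    rng = np.random.default_rng() if rng is None else rng
    mean = ((b - k) * u_a + (k - a) * u_b) / (b - a)  # linear interpolation of endpoints
    var = (b - k) * (k - a) / (b - a)                 # shrinks to 0 at both endpoints
    return mean + np.sqrt(var) * rng.standard_normal(size)
```

With the question's example ($W(0)=0$, $W(1)=1$), this draws $W(0.1)$ from $N(0.1, 0.09)$ but $W(0.99)$ from $N(0.99, 0.0099)$, which resolves the counter-intuitive result above.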

Which "feels" like it should be true.

EDIT: I double checked the process by going through the same calculations but with $a<b<k$, and got the right result $W(k) \sim N(u_b,k-b)$.
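The $a<b<k$ check can be sketched with the same linear algebra; only $\Sigma_{11}$ and $\Sigma_{12}$ change (illustrative times $a=1$, $b=2$, $k=5$ are my own):

```python
import numpy as np

a, b, k = 1.0, 2.0, 5.0  # illustrative times, now with a < b < k

Sigma11 = np.array([[k]])
Sigma12 = np.array([[a, b]])  # Cov(W(k), W(a)) = a, Cov(W(k), W(b)) = b
Sigma22 = np.array([[a, a], [a, b]])

gain = Sigma12 @ np.linalg.inv(Sigma22)
cond_var = Sigma11 - gain @ Sigma12.T

print(gain)      # ≈ [[0, 1]]: the conditional mean is u_b
print(cond_var)  # ≈ [[k - b]]: the conditional variance is k - b
```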