Convergence for a Random Normal Process defined by Recursion


A question from my Random Processes exams:

Let $ W_0,W_1,W_2,\dots$ be a sequence of independent Gaussian random variables with mean $0$ and variance $ \sigma ^ 2 > 0 $.

Define the sequence $ (X_n : n \geq 0) $ recursively by $ X_0 = 0 $ and $ X_{k+1} = \frac{X_k + W_k}{2}$. Determine whether the sequence $(X_n)$ converges in each of the four senses, namely mean square, almost sure, probability and distribution.

My work:

$$ X_1 = \frac{W_0}{2}$$ $$ X_2 = \frac{W_1 + X_1}{2} = \frac{W_0}{2^2} + \frac{W_1}{2}$$

Thus,

$$X_n = \sum_{k=0}^{n-1}{\frac{W_k}{2^{n-k}}}$$
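As a quick sanity check on this closed form (not part of the proof), one can unroll the recursion numerically and compare it with the sum; $\sigma = 1$ and $n = 10$ below are arbitrary illustrative choices:

```python
import random

# Check numerically that the recursion X_{k+1} = (X_k + W_k)/2, X_0 = 0,
# unrolls to the closed form X_n = sum_{k=0}^{n-1} W_k / 2^{n-k}.
# sigma = 1 and n = 10 are arbitrary illustrative choices.
random.seed(0)
n = 10
w = [random.gauss(0.0, 1.0) for _ in range(n)]

x = 0.0                      # unroll the recursion
for k in range(n):
    x = (x + w[k]) / 2.0

closed = sum(w[k] / 2.0 ** (n - k) for k in range(n))  # closed form

print(abs(x - closed))       # agrees up to floating-point rounding
```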

Now, how do I use the fact that the $W_k$ are $\pmb{i.i.d.}$ normal to settle each of the four modes of convergence?

A sum of independent normal RV's is again normal, so each $X_n$ is normal.

Thus, $\mu(X_n) = 0$

Also, by independence, $$Var(X_n) = Var\bigl(\sum_{k=0}^{n-1}{\frac{W_k}{2^{n-k}}}\bigr)=\sum_{k=0}^{n-1}{\frac{Var(W_k)}{4^{n-k}}}$$

$$Var(X_n) = \sigma ^ 2 \sum_{k=0}^{n-1}{\frac{1}{4^{n-k}}} = \sigma ^ 2 \bigl( \frac{\frac{1}{4}(1-\frac{1}{4^n})}{1-\frac{1}{4}} \bigr) = \frac{\sigma^2}{3}\Bigl(1-\frac{1}{4^n}\Bigr)$$

$$\lim_{n\to\infty} Var(X_n) = \frac{\sigma ^ 2}{3} $$
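A Monte Carlo sketch can corroborate these moments; $\sigma = 2$, $n = 30$ and the sample size below are arbitrary illustrative choices:

```python
import random

# Monte Carlo sanity check (illustrative only): estimate E[X_n] and
# Var(X_n) for a moderately large n and compare with the theoretical
# values 0 and (sigma^2/3)(1 - 4**-n).  sigma = 2, n = 30 and the
# number of trials are arbitrary choices.
random.seed(1)
sigma, n, trials = 2.0, 30, 100_000

def sample_xn():
    """Run the recursion X_{k+1} = (X_k + W_k)/2 for n steps."""
    x = 0.0
    for _ in range(n):
        x = (x + random.gauss(0.0, sigma)) / 2.0
    return x

samples = [sample_xn() for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

theory = sigma ** 2 / 3 * (1 - 4.0 ** -n)   # = sigma^2/3 up to 4**-30
print(round(mean, 2), round(var, 2), round(theory, 2))
```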

Now, how do I move forward from here?

BEST ANSWER

You've already shown that $\ \big(X_n\big)\ $ converges in distribution to $\ \mathscr{N}\left(0,\frac{\sigma^2}{3}\right)\ $. Since both almost-sure convergence and mean-square convergence imply convergence in probability, it suffices to show that $\ \big(X_n\big)\ $ does not converge in probability; then it cannot converge almost surely or in mean square either.

Suppose $\ \big(X_n\big)\ $ converges in probability to $\ X\ $. Then, for any positive integers $\ m,n\ $ and any positive real $\ \epsilon\ $, \begin{align} P\big(|X-X_m|>\epsilon\big)&=P\big(|X_n-X_m-(X_n-X)|>\epsilon\big)\\ &\ge P\big(|X_n-X_m|-|X_n-X|>\epsilon\big)\\ &\ge P\big(\{|X_n-X_m|>2\epsilon\}\setminus \{|X_n-X|>\epsilon\}\big)\\ &\ge P\big(|X_n-X_m|>2\epsilon\big)-P\big(|X_n-X|>\epsilon\big)\ , \end{align} where the first inequality uses the reverse triangle inequality and the last uses $\ P(A\setminus B)\ge P(A)-P(B)\ $.

Since $\ W_0, W_1,\dots, W_n, \dots\ $ are independent normal, $\ X_n-X_m\ $ is normal with mean $0$. For $\ m\le n\ $, the summands common to $\ X_n\ $ and $\ X_m\ $ give \begin{align} E\big(X_nX_m\big)=\sum_{k=0}^{m-1}\frac{\sigma^2}{2^{n-k}\,2^{m-k}}=\frac{\sigma^2}{3}\,2^{m-n}\left(1-4^{-m}\right)\ , \end{align} so \begin{align} Var\big(X_n-X_m\big)&=Var\big(X_n\big)+Var\big(X_m\big)-2E\big(X_nX_m\big)\\ &=\frac{\sigma^2}{3}\left[\left(1-4^{-n}\right)+\left(1-2^{m-n+1}\right)\left(1-4^{-m}\right)\right]\\ &\ge\frac{\sigma^2}{3}\left(1-4^{-n}\right)\ge\frac{\sigma^2}{4} \end{align} whenever $\ m<n\ $, since then $\ 2^{m-n+1}\le 1\ $ and $\ 1-4^{-n}\ge\frac{3}{4}\ $; by symmetry, $\ Var\big(X_n-X_m\big)\ge\frac{\sigma^2}{4}\ $ for all $\ m\ne n\ $. Under the supposition that $\ \big(X_n\big)\ $ converges in probability to $\ X\ $, there must exist a positive integer $\ N\ $ such that $\ P\big(|X_N-X|>\frac{\sigma}{4}\big)<1-\Phi(1)\ $, where $\ \Phi\ $ denotes the standard normal distribution function. Taking $\ \epsilon=\frac{\sigma}{4}\ $ above, for any $\ m\ne N\ $ we have \begin{align} P\left(|X-X_m|>\frac{\sigma}{4}\right)&\ge P\left(|X_N-X_m|>\frac{\sigma}{2}\right)-P\left(|X_N-X|>\frac{\sigma}{4}\right)\\ &\ge P\left(|X_N-X_m|>\sqrt{Var\big(X_N-X_m\big)}\right)-\big(1-\Phi(1)\big)\\ &=2\big(1-\Phi(1)\big)-\big(1-\Phi(1)\big)\\ &=1-\Phi(1)>0\ , \end{align} since $\ \frac{\sigma}{2}=\sqrt{\frac{\sigma^2}{4}}\le\sqrt{Var\big(X_N-X_m\big)}\ $ and a centred normal variable exceeds its standard deviation in absolute value with probability $\ 2\big(1-\Phi(1)\big)\ $. Hence $\ P\big(|X-X_m|>\frac{\sigma}{4}\big)\ $ does not tend to $0$, so $\ \big(X_n\big)\ $ does not converge in probability to $\ X\ $, contradicting the original assumption. Therefore $\ \big(X_n\big)\ $ converges in distribution, but not in probability, almost surely, or in mean square.
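One can also see concretely why convergence in probability fails: the increments satisfy $X_{n+1}-X_n=\frac{W_n-X_n}{2}$, whose variance tends to $\frac{1}{4}\big(\sigma^2+\frac{\sigma^2}{3}\big)=\frac{\sigma^2}{3}$ rather than to $0$. The simulation sketch below (illustrative only; $\sigma$, $\epsilon$, and the sample size are arbitrary choices) estimates $P\big(|X_{n+1}-X_n|>\epsilon\big)$ for growing $n$ and shows it stabilising near $2\big(1-\Phi(\epsilon\sqrt{3}/\sigma)\big)$, where $\Phi$ is the standard normal distribution function, instead of vanishing as a Cauchy-in-probability sequence would require:

```python
import random

# Estimate P(|X_{n+1} - X_n| > eps) for increasing n.  If (X_n)
# converged in probability it would be Cauchy in probability and these
# probabilities would tend to 0; instead they stabilise near a positive
# constant, because Var(X_{n+1} - X_n) -> sigma^2/3.
# sigma, eps and the number of trials are arbitrary choices.
random.seed(2)
sigma, eps, trials = 1.0, 0.5, 50_000

def big_increment(n):
    """Run the recursion n+1 steps; report whether |X_{n+1}-X_n| > eps."""
    x = 0.0
    for _ in range(n):
        x = (x + random.gauss(0.0, sigma)) / 2.0
    x_next = (x + random.gauss(0.0, sigma)) / 2.0
    return abs(x_next - x) > eps

ps = [sum(big_increment(n) for _ in range(trials)) / trials
      for n in (5, 10, 20, 40)]
print(ps)   # all close to one another and bounded away from 0
```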