Convergence of a sequence of random variables


I have a problem about a sequence of normals. The sequence $(X_n)_{n\geq 0}$ is defined by $$X_{n+1}=aX_n+U_{n+1},\qquad X_0=0,$$ where $(U_n)_{n\geq1}$ is a sequence of i.i.d. random variables, normally distributed $\mathcal{N}(m,\sigma^2)$. I have to study (for $a<1$) the convergence in distribution of $(X_n)_{n\geq1}$, and to determine (for $a>1$) whether $$Y_n:=\frac{X_n}{a^{n-1}}$$ converges in $L^2(P)$.

What I have already done:

I already wrote $X_n$ as $\sum_{i=1}^n a^{n-i} U_i$ and computed its mean and variance (respectively $E[X_n]=m\frac{a^n-1}{a-1}$ and $\operatorname{var}[X_n]=\sigma^2 \frac{a^{2n}-1}{a^2-1}$), but now I'm stuck: the limiting distribution isn't given in any parametrized form. Should I simply take the limit as $n \to \infty$ of the mean and the variance?
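As a quick sanity check on those two closed-form expressions, one can simulate many independent paths of the recursion and compare the empirical mean and variance of $X_n$ against the geometric-sum formulas. This is just an illustrative sketch; the parameter values `a`, `m`, `sigma`, `n` are arbitrary choices, not part of the problem.

```python
import numpy as np

# Illustrative parameters (not from the problem statement).
a, m, sigma, n = 0.7, 1.0, 2.0, 10
rng = np.random.default_rng(0)

# Simulate many independent paths of X_{k+1} = a X_k + U_{k+1}, X_0 = 0.
paths = 200_000
X = np.zeros(paths)
for _ in range(n):
    X = a * X + rng.normal(m, sigma, size=paths)

# Closed-form mean and variance from summing the geometric series.
mean_th = m * (a**n - 1) / (a - 1)
var_th = sigma**2 * (a**(2 * n) - 1) / (a**2 - 1)

print(X.mean(), mean_th)  # empirical vs. theoretical mean
print(X.var(), var_th)    # empirical vs. theoretical variance
```

The empirical moments agree with the formulas up to Monte Carlo error, which supports the computation above.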

There is 1 answer below.


Your expression for $X_n$ is correct: with $X_0=0$, unrolling the recursion gives $X_n = \sum_{i=1}^n a^{n-i} U_i$ (check the base case: $X_1 = aX_0 + U_1 = U_1$).

Since the question talks about convergence in distribution, you do want to take limits as $n \to \infty$. Note that a sum of independent normal random variables is normal.
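Concretely, since each $X_n$ is a sum of independent normals, $X_n$ is normal with the mean and variance computed above, and for $|a|<1$ those converge to $m/(1-a)$ and $\sigma^2/(1-a^2)$, so the limit in distribution is $\mathcal{N}\!\left(\frac{m}{1-a},\frac{\sigma^2}{1-a^2}\right)$. A minimal simulation sketch (parameter values are illustrative assumptions) comparing a long run of the chain with those limiting moments:

```python
import numpy as np

# Illustrative parameters with |a| < 1.
a, m, sigma = 0.5, 1.0, 1.5
rng = np.random.default_rng(1)

# Run the chain long enough that a**n is negligible.
paths, n = 200_000, 60
X = np.zeros(paths)
for _ in range(n):
    X = a * X + rng.normal(m, sigma, size=paths)

mean_lim = m / (1 - a)             # limit of m (a^n - 1)/(a - 1)
var_lim = sigma**2 / (1 - a**2)    # limit of sigma^2 (a^{2n} - 1)/(a^2 - 1)

print(X.mean(), mean_lim)
print(X.var(), var_lim)
```

For large $n$ the empirical mean and variance match the limiting values, consistent with convergence in distribution to the stationary normal law.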