Transformation step in the lag-$l$ autocovariance of a linear time series that I cannot understand.


We have a linear time series defined as follows:

$r_t=\mu+\sum_{i=0}^{\infty}\psi_ia_{t-i}$,

where $\{a_t\}$ is a sequence of iid random variables with mean zero and a well-defined distribution.

The lag-$l$ autocovariance of $r_t$ is

$\begin{aligned} \gamma_l &= \operatorname{Cov}(r_t,r_{t-l}) \\ &= E\left[\left(\sum_{i=0}^{\infty}\psi_i a_{t-i}\right)\left(\sum_{j=0}^{\infty}\psi_j a_{t-l-j}\right)\right] \\ &= E\left(\sum_{i,j=0}^{\infty}\psi_i\psi_j a_{t-i}a_{t-l-j}\right) \\ &= \sum_{j=0}^{\infty}\psi_{j+l}\psi_j E(a^2_{t-l-j}) \\ &= \dots \end{aligned}$

From here the derivation continues and I can follow it. The step I cannot follow is the last equality shown above.

There is 1 answer below.

Best answer:

Since $(a_i)_{i\geq0}$ is an i.i.d. sequence with mean zero, $\mathbb E[a_ia_j]$ is zero if $i\neq j$ and equals $\sigma^2=\mathbb E[a_0^2]$ if $i=j$. In the double sum, the product $a_{t-i}a_{t-l-j}$ has indices $t-i$ and $t-l-j$, which coincide exactly when $i=j+l$; those are the only terms with nonzero expectation, which collapses the double sum to the single sum over $j$. To justify exchanging the expectation with the infinite sum, you also need monotone convergence (if $\psi_i\geq0$ for all $i$) or, without that assumption, dominated convergence (if $\sum_i |\psi_i|<\infty$).
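As a numeric sanity check (not part of the original answer), the sketch below compares the full double sum, with $E[a_{t-i}a_{t-l-j}]$ replaced by $\sigma^2\mathbf 1\{i=j+l\}$, against the collapsed single sum $\sigma^2\sum_j \psi_{j+l}\psi_j$. The geometric weights $\psi_j=\phi^j$ and the truncation at $N$ terms are illustrative assumptions, not taken from the question.

```python
import numpy as np

# Illustrative assumption: geometric weights psi_j = phi**j, truncated at N terms.
phi, sigma2, N, l = 0.5, 2.0, 200, 3
psi = phi ** np.arange(N)

# Double sum with E[a_{t-i} a_{t-l-j}] = sigma2 * 1{i = j + l} (the iid step).
gamma_direct = sum(
    psi[i] * psi[j] * (sigma2 if i == j + l else 0.0)
    for i in range(N)
    for j in range(N)
)

# Collapsed single sum from the derivation: gamma_l = sigma2 * sum_j psi_{j+l} psi_j.
gamma_formula = sigma2 * np.sum(psi[l:] * psi[: N - l])

print(gamma_direct, gamma_formula)  # the two values should agree
```

For these geometric weights there is also a closed form, $\gamma_l = \sigma^2\phi^l/(1-\phi^2)$, which the truncated sums match up to negligible truncation error.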