MA process conditional variance


I am given an MA(q) process $Y_t = \epsilon_t + \theta_1\epsilon_{t-1} + \dots + \theta_q\epsilon_{t-q}$, where $\epsilon_t$ is a zero-mean white noise process with variance $\sigma_\epsilon^2$, and $I_{t-1}$ is the information available up to time $t-1$. I would like to calculate $\mathit{Var}(Y_t|I_{t-1})$.

I would have done it as follows. Since the past shocks $\epsilon_{t-1},\dots,\epsilon_{t-q}$ are known given $I_{t-1}$, we have $Y_t - \mathit{E}[Y_t|I_{t-1}] = \epsilon_t$, so $$ \mathit{Var}(Y_t|I_{t-1}) = \mathit{E}[(Y_t - \mathit{E}[Y_t|I_{t-1}])^2 |I_{t-1}] = \mathit{E}[\epsilon_t^2 |I_{t-1}] = \mathit{E}[\epsilon_t^2] = \mathit{Var}(\epsilon_t) = \sigma_\epsilon^2 $$

Is this approach right?


Best answer

Your approach is correct. Here is a longer derivation.

$$\mathit{Var}(Y_t|I_{t-1}) = \mathit{E}[Y_t^2|I_{t-1}] - \bigl(\mathit{E}[Y_t|I_{t-1}]\bigr)^2$$ With $Y_t = \epsilon_t + \sum\limits_{i=1}^q \theta_i \epsilon_{t-i}$ and $\mathit{E}[Y_t|I_{t-1}] = \sum\limits_{i=1}^q \theta_i \epsilon_{t-i}$ (the past shocks are known at time $t-1$), this becomes $$\mathit{E}\Bigl[\Bigl(\epsilon_t + \sum\limits_{i=1}^q \theta_i \epsilon_{t-i}\Bigr)^2 \Big| I_{t-1}\Bigr] - \Bigl(\sum\limits_{i=1}^q \theta_i \epsilon_{t-i}\Bigr)^2 = \mathit{E}[\epsilon_t^2] = \sigma_\epsilon^2$$ The cross term $2\epsilon_t\sum_{i=1}^q \theta_i \epsilon_{t-i}$ vanishes in conditional expectation because $\mathit{E}[\epsilon_t|I_{t-1}] = 0$.
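The result can also be checked numerically. Below is a minimal Monte Carlo sketch (my own illustration, not from the question): it fixes one realization of the past shocks, which is what conditioning on $I_{t-1}$ amounts to, draws many fresh values of $\epsilon_t$, and checks that the sample variance of $Y_t$ matches $\sigma_\epsilon^2$. The MA(2) coefficients and $\sigma_\epsilon$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: an MA(2) with theta_1, theta_2 and sigma_eps
theta = np.array([0.5, -0.3])
sigma_eps = 2.0
n_rep = 100_000

# One fixed realization of the past shocks eps_{t-1}, eps_{t-2}:
# this plays the role of conditioning on I_{t-1}
past_eps = rng.normal(0.0, sigma_eps, size=theta.size)
cond_mean = theta @ past_eps  # E[Y_t | I_{t-1}] = sum_i theta_i * eps_{t-i}

# Many fresh draws of eps_t, each giving one conditional realization of Y_t
eps_t = rng.normal(0.0, sigma_eps, size=n_rep)
y_t = eps_t + cond_mean

cond_var = y_t.var()  # sample estimate of Var(Y_t | I_{t-1})
print(cond_var, sigma_eps**2)  # the two values should be close
```

Because the past shocks enter $Y_t$ only through the constant `cond_mean`, the only source of conditional randomness is $\epsilon_t$, so the printed sample variance converges to $\sigma_\epsilon^2 = 4$.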