Understanding Moving Average Models


Can someone explain why we use past errors to predict future values in Moving Average models? Using past values, as in $\text{AR}$ models, makes intuitive sense, but why past errors? In particular, for models with only a Moving Average component: would anybody ever use an $\text{ARMA}(0,1)$ model, and why?


Let's say you have a time series $(x_t)_{t\in\mathbb N}$ and you observe that it fluctuates around a number $\mu$. Then you may postulate that $$x_t = \mu + \epsilon_t.$$

Using some observations you compute an estimate $\hat\mu$. With this number, you can compute your estimated deviations as $\hat\epsilon_t = x_t - \hat\mu$ and plot them. You observe that the signs of these tend to flip, i.e. if $\hat\epsilon_t>0$, then you tend to observe $\hat\epsilon_{t+1}<0$. This suggests a negative autocorrelation for $\epsilon_t$, so now you may think "perhaps $\epsilon$ follows an AR(1)" and write $$\epsilon_t = \theta\epsilon_{t-1} + \eta_t.$$
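To make this diagnostic concrete, here is a minimal sketch in Python (the seed, sample size, and parameter values are illustrative assumptions, not part of the answer): simulate errors that follow an AR(1) with a negative coefficient, estimate $\mu$ by the sample mean, and check that the estimated deviations have negative lag-1 autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
mu, theta = 10.0, -0.6  # assumed true mean and (negative) AR(1) coefficient

# Simulate errors eps_t = theta * eps_{t-1} + eta_t, then add the mean.
eps = np.zeros(n)
eta = rng.normal(size=n)
for t in range(1, n):
    eps[t] = theta * eps[t - 1] + eta[t]
x = mu + eps

# Estimate the mean, form the estimated deviations, and check their lag-1 correlation.
mu_hat = x.mean()
eps_hat = x - mu_hat
lag1 = np.corrcoef(eps_hat[:-1], eps_hat[1:])[0, 1]
print(f"mu_hat = {mu_hat:.3f}, lag-1 autocorrelation = {lag1:.3f}")  # clearly negative
```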

You plug this back in to obtain $$x_t = \mu + \eta_t + \theta\epsilon_{t-1},$$ which is not exactly an MA(1) (a true MA(1) would have the previous innovation $\eta_{t-1}$ in place of the previous deviation $\epsilon_{t-1}$), but some notation changes may be cleverly done. Either way, the form shows where the past error comes from: today's value inherits a fraction $\theta$ of yesterday's error, which is exactly what a Moving Average term encodes.
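And if you ever want to fit such a model in practice (the "would anybody ever use $\text{ARMA}(0,1)$" part of the question), one common route is statsmodels, where `ARIMA(x, order=(0, 0, 1))` is an MA(1) with a constant. A sketch on simulated data, with the seed, sample size, and parameter values again being assumptions:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate a true MA(1): x_t = mu + eta_t + theta * eta_{t-1} (values assumed).
rng = np.random.default_rng(1)
n, mu, theta = 500, 10.0, -0.6
eta = rng.normal(size=n + 1)
x = mu + eta[1:] + theta * eta[:-1]

# ARIMA(0, 0, 1) is an MA(1) plus a constant; .fit() estimates mu, theta, sigma^2.
res = ARIMA(x, order=(0, 0, 1)).fit()
print(res.params)  # const, ma.L1, sigma2 -- should land near 10, -0.6, 1
```

On data generated this way, the fitted constant and MA coefficient should come out close to $\mu$ and $\theta$.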