Meaning of 'showing that a time series is an MA model'


Show that $\{X_t+Y_t\}$ can be considered as a stationary MA(2) model, where $\{X_t\}$ is a stationary MA(1) model, $\{Y_t\}$ is a stationary MA(2) model, and $\{X_t\}$ and $\{Y_t\}$ are independent.

I thought showing that $\{X_t+Y_t\}$ is an MA(2) model would follow the same kind of argument as showing that the sum is stationary, working directly from the definition. So I put $X_t=\epsilon_t+a_1\epsilon_{t-1}$ and $Y_t=e_t+b_1e_{t-1}+b_2e_{t-2}$, then simply added the two: $X_t+Y_t=(\epsilon_t+e_t)+(a_1\epsilon_{t-1}+b_1e_{t-1})+b_2e_{t-2}$. Here is where I'm stuck. Doesn't saying that $\{X_t+Y_t\}$ is an MA(2) model mean that it can be expressed as $\eta_t+c_1\eta_{t-1}+c_2\eta_{t-2}$ for some white noise $\{\eta_t\}$? If so, is it okay to put $\eta_t=\epsilon_t+e_t$, $c_1\eta_{t-1}=a_1\epsilon_{t-1}+b_1e_{t-1}$, $c_2\eta_{t-2}=b_2e_{t-2}$?

I felt this was not the right approach, so I googled a bit and found this link. On pages 2 to 3 it says: "Given that $W(t)$ is a weakly stationary time series, and according to the $\gamma_W(h)$, we assume $W(t)$ is an MA(max(p,q)) process, $W(t) = \epsilon_t + \theta_1\epsilon_{t-1} + \cdots + \theta_p\epsilon_{t-p}$." How can we say that this assumption is justified? Even if we manage to solve for the coefficients, how can we guarantee that $\epsilon_1,\epsilon_2,\cdots$ are white noise errors?
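For intuition (not a proof), here is a quick numerical sketch of the property that the linked argument relies on: since $\{X_t\}$ and $\{Y_t\}$ are independent, the autocovariance of the sum is the sum of the autocovariances, and it vanishes for all lags $|h|>2$; a zero-mean stationary series with that autocovariance structure is what gets identified as an MA(2). The coefficients $a_1=0.5$, $b_1=0.4$, $b_2=-0.3$ and the innovation variances below are arbitrary illustrative values, not from the problem.

```python
import numpy as np

def ma_autocov(coeffs, sigma2, h):
    """Autocovariance at lag h of an MA process written with
    coefficient vector coeffs = [1, theta_1, ..., theta_q] and
    innovation variance sigma2:
        gamma(h) = sigma2 * sum_i c_i * c_{i+|h|},
    which is zero whenever |h| > q."""
    c = np.asarray(coeffs, dtype=float)
    h = abs(h)
    if h >= len(c):
        return 0.0
    return sigma2 * float(np.dot(c[:len(c) - h], c[h:]))

# Hypothetical example: X_t = eps_t + 0.5 eps_{t-1}      (MA(1))
#                       Y_t = e_t + 0.4 e_{t-1} - 0.3 e_{t-2}  (MA(2))
a = [1.0, 0.5]
b = [1.0, 0.4, -0.3]
s_eps, s_e = 1.0, 2.0   # assumed innovation variances

# Independence of the two noise sequences kills all cross-covariance
# terms, so the autocovariances simply add.
gamma_sum = {h: ma_autocov(a, s_eps, h) + ma_autocov(b, s_e, h)
             for h in range(5)}
print(gamma_sum)
# gamma_sum[h] = 0 for every h > 2, the autocovariance signature of an MA(2).
```

This only shows the autocovariance cuts off at lag 2; the remaining (and harder) step, which the question is really about, is arguing that a stationary series with such an autocovariance admits a representation $\eta_t+c_1\eta_{t-1}+c_2\eta_{t-2}$ with $\{\eta_t\}$ genuinely white noise.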