Autocorrelation and var-cov matrix


$$Y_t=\beta_1+\beta_2 X_{t2}+\dots +\beta_k X_{tk}+\epsilon_t \qquad (t=1,\dots,T)$$
$$\epsilon_t=\rho \epsilon_{t-1}+v_t, \qquad v_t \sim \mathrm{i.i.d.}(0,\sigma^2_v)$$

GLS estimation under AR(1) errors:
$Y=X\beta +\epsilon$, $\epsilon \sim (0,\Phi)$
$$\Phi=\frac{\sigma^2_v}{1-\rho^2}\begin{pmatrix} 1 & \rho & \rho^2 & \dots & \rho^{T-1}\\ \rho & 1 & \rho & \dots & \rho^{T-2}\\ \vdots & & & \ddots & \vdots \\ \rho^{T-1} & \rho^{T-2} & \dots & \rho & 1 \end{pmatrix} =\sigma^2_\epsilon \Psi$$ $$\hat\beta_{GLS}=(X' \Psi^{-1}X)^{-1}X'\Psi^{-1}Y$$
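For concreteness, here is a small numerical sketch of the GLS formula above (NumPy; the sample size, $\rho$, and coefficient values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, sigma_v = 200, 0.6, 1.0

# Psi as the AR(1) correlation matrix: (i, j) entry is rho^|i-j|
idx = np.arange(T)
Psi = rho ** np.abs(idx[:, None] - idx[None, :])

# Simulate y_t = 1 + 2 x_t + eps_t with AR(1) errors (illustrative values)
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
v = rng.normal(scale=sigma_v, size=T)
eps = np.zeros(T)
eps[0] = v[0] / np.sqrt(1 - rho**2)   # stationary start
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + v[t]
y = X @ np.array([1.0, 2.0]) + eps

# GLS: beta_hat = (X' Psi^{-1} X)^{-1} X' Psi^{-1} y
Psi_inv = np.linalg.inv(Psi)
beta_gls = np.linalg.solve(X.T @ Psi_inv @ X, X.T @ Psi_inv @ y)
print(beta_gls)   # should land near the true values [1, 2]
```

Note that any scalar multiple of $\Psi$ gives the same $\hat\beta_{GLS}$, since the scalar cancels between the two factors.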

The book says "transform and apply OLS",
$$P'P=\Psi^{-1}=\begin{pmatrix} 1 & -\rho & 0 & \dots & 0\\ -\rho & 1+\rho^2 & -\rho & \dots & 0\\ \vdots & & \ddots & & \vdots \\ 0 & \dots & 0 & -\rho & 1 \end{pmatrix}$$ and $$P=\begin{pmatrix} \sqrt{1-\rho^2} & 0 & 0 & \dots & 0\\ -\rho & 1 & 0 & \dots & 0\\ \vdots & & \ddots & & \vdots \\ 0 & \dots & 0 & -\rho & 1 \end{pmatrix}$$
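The factorization is easy to check numerically. One caveat worth knowing: if $\Psi$ is read as the plain correlation matrix from the first display, then $P'P$ equals $\Psi^{-1}$ only up to the scalar $(1-\rho^2)$; that scalar cancels in the GLS formula, and many texts absorb the $1/(1-\rho^2)$ into $\Psi$ so it disappears. A quick check (with arbitrary $T$ and $\rho$):

```python
import numpy as np

T, rho = 6, 0.5

# Psi as the AR(1) correlation matrix, entries rho^|i-j|
idx = np.arange(T)
Psi = rho ** np.abs(idx[:, None] - idx[None, :])

# The book's triangular P: sqrt(1 - rho^2) in the corner,
# then 1 on the diagonal and -rho on the subdiagonal
P = np.eye(T)
P[0, 0] = np.sqrt(1 - rho**2)
for t in range(1, T):
    P[t, t - 1] = -rho

# P'P matches the tridiagonal matrix in the question...
PtP = P.T @ P
assert np.isclose(PtP[0, 0], 1) and np.isclose(PtP[1, 1], 1 + rho**2)
assert np.isclose(PtP[1, 0], -rho) and np.isclose(PtP[-1, -1], 1)

# ...and equals Psi^{-1} up to the harmless scalar (1 - rho^2)
assert np.allclose(PtP, (1 - rho**2) * np.linalg.inv(Psi))
```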

However I can't follow that process.
How can I get $P'P$ and $P$?


Best answer:

If I read you correctly, you're asking two questions:

  1. How to get $P$?

  2. Once you get $P$, what to do?

Basically you're asking how GLS reduces to OLS.

Answer:

  1. Diagonalize the positive definite matrix $\Psi$, say $\Psi = V\Lambda V'$. The same $V$ diagonalizes $f(\Psi)$ for any "reasonable" $f$; taking $f(x) = x^{-\frac{1}{2}}$ gives $P = \Psi^{-\frac{1}{2}} = V\Lambda^{-\frac{1}{2}}V'$, which satisfies $P'P = \Psi^{-1}$. Note that $P$ is not unique: any matrix with $P'P = \Psi^{-1}$ (even up to a scalar) will do, and the triangular $P$ quoted from your book is a Cholesky-type factor of this kind rather than the symmetric square root.

  2. Now transform your model

$$ PY = PX \beta + P \epsilon. $$

The transformed error terms $P \epsilon$ are now spherical (mean zero, covariance proportional to the identity). So the transformed model satisfies the usual linear-model assumptions under which OLS is appropriate. The OLS formula then gives you

$$ \hat{\beta} = (X^T P^T P X)^{-1} X^T P^T P Y = (X^T \Psi^{-1} X)^{-1} X^T \Psi^{-1} Y. $$

Another answer:

You can regress $Py$ on $PX$ if you know $\rho$. If you had only a constant and a single explanatory variable, your data would look like this:

$$Py=\begin{bmatrix}\sqrt{1-\rho^2}\,y_1 \\ y_2 - \rho y_1 \\ \vdots \\ y_T-\rho y_{T-1}\end{bmatrix},\qquad PX=\begin{bmatrix}\sqrt{1-\rho^2} & \sqrt{1-\rho^2}\,x_1 \\ 1- \rho & x_2 - \rho x_1 \\ \vdots & \vdots\\ 1-\rho & x_T-\rho x_{T-1} \end{bmatrix}.$$
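If it helps, here is a quick check (with made-up numbers) that this row-by-row quasi-differencing is the same thing as multiplying by the triangular $P$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, rho = 8, 0.3
y = rng.normal(size=T)
x = rng.normal(size=T)

# Row-by-row Prais-Winsten transform, as in the display above
Py = np.concatenate(([np.sqrt(1 - rho**2) * y[0]], y[1:] - rho * y[:-1]))
Px_const = np.concatenate(([np.sqrt(1 - rho**2)], np.full(T - 1, 1 - rho)))
Px_x = np.concatenate(([np.sqrt(1 - rho**2) * x[0]], x[1:] - rho * x[:-1]))

# Same thing via the matrix P
P = np.eye(T)
P[0, 0] = np.sqrt(1 - rho**2)
for t in range(1, T):
    P[t, t - 1] = -rho
X = np.column_stack([np.ones(T), x])

assert np.allclose(P @ y, Py)
assert np.allclose(P @ X, np.column_stack([Px_const, Px_x]))
```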

In the olden days, before nonlinear least squares was easy, you would use the Cochrane-Orcutt or Prais-Winsten feasible GLS procedures when $\rho$ was unknown.
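For completeness, a minimal Cochrane-Orcutt-style sketch (a simplified illustration, not the textbook algorithm verbatim): it alternates between OLS on quasi-differenced data, dropping the first observation as classic Cochrane-Orcutt does, and re-estimating $\rho$ from the residuals.

```python
import numpy as np

def cochrane_orcutt(y, X, n_iter=20):
    """Feasible GLS for AR(1) errors (illustrative sketch): alternate
    between OLS on quasi-differenced data and estimating rho by
    regressing residuals on their own lag."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from plain OLS
    rho = 0.0
    for _ in range(n_iter):
        e = y - X @ beta
        rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])   # OLS of e_t on e_{t-1}
        beta = np.linalg.lstsq(X[1:] - rho * X[:-1],
                               y[1:] - rho * y[:-1], rcond=None)[0]
    return beta, rho

# Made-up data: y_t = 1 + 2 x_t + eps_t with AR(1) errors, rho = 0.5
rng = np.random.default_rng(3)
T, rho_true = 500, 0.5
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
v = rng.normal(size=T)
eps = np.zeros(T)
eps[0] = v[0] / np.sqrt(1 - rho_true**2)
for t in range(1, T):
    eps[t] = rho_true * eps[t - 1] + v[t]
y = X @ np.array([1.0, 2.0]) + eps

beta_hat, rho_hat = cochrane_orcutt(y, X)
print(beta_hat, rho_hat)   # should be near [1, 2] and 0.5
```

Because the constant column is quasi-differenced along with everything else (becoming $1-\rho$), the regression recovers $\beta_1$ itself, not $\beta_1(1-\rho)$.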