ML estimator of generalized least squares


I'm hung up on the following question. I know how to do part of it in concept, but the actual technique fails me.

Show that the maximum likelihood estimator of $\beta$ is $$ {\bf b}_{\mathrm GLS} = \left(\bf X^\top \Sigma_{\epsilon\epsilon}^{-1}X\right)^{-1} {\bf X^\top \Sigma_{\epsilon\epsilon}^{-1} y} $$ and that its sampling variance is $$ V({\bf b}_{\mathrm GLS}) = \left( {\bf X^\top \Sigma_{\epsilon\epsilon}^{-1} X}\right)^{-1} $$

I know that to solve the first part, I need to find the zero of the partial derivative with respect to $\beta$ of the generalized sum of squares, $\left( {\bf y - X \beta} \right)^\top \Sigma_{\epsilon\epsilon}^{-1} \left( {\bf y - X \beta} \right)$. The problem is, I'm not sure how to do this. It would be generous to say I have very little experience with matrix calculus. I know how to differentiate a quadratic form with respect to its flanking (?) vectors, but this is of little help here, because I need to differentiate with respect to $\beta$, and those flanking terms are the entire residual. That derivative would be $\frac{\partial\, {\mathrm GLS}}{\partial \left({\bf y - X \beta}\right)} = 2 \Sigma_{\epsilon\epsilon}^{-1}\left( {\bf y - X \beta} \right)$ (I think). This appears to be pretty useless. Expanding the criterion as ${\bf y^\top \Sigma_{\epsilon\epsilon}^{-1} y} - 2({\bf X\beta})^\top \Sigma_{\epsilon\epsilon}^{-1} {\bf y} + ({\bf X\beta})^\top \Sigma_{\epsilon\epsilon}^{-1} ({\bf X\beta})$ may be of some help; I'm not sure.
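Not a proof, but a quick numerical sanity check may build confidence in the target answer: the formula ${\bf b}_{\mathrm GLS} = ({\bf X^\top \Sigma^{-1} X})^{-1}{\bf X^\top \Sigma^{-1} y}$ should make the gradient of the generalized sum of squares vanish. A minimal NumPy sketch (the dimensions and random data here are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)   # an arbitrary SPD error covariance
Sinv = np.linalg.inv(Sigma)

# b_GLS = (X' S^{-1} X)^{-1} X' S^{-1} y, via a linear solve
b = np.linalg.solve(X.T @ Sinv @ X, X.T @ Sinv @ y)

# gradient of (y - Xb)' S^{-1} (y - Xb) with respect to beta, at b:
grad = -2 * X.T @ Sinv @ (y - X @ b)
print(np.allclose(grad, 0))  # True: b_GLS zeroes the gradient
```

Note the extra factor of $-{\bf X}^\top$ relative to the derivative with respect to the residual: that is the chain rule applied through $\partial({\bf y - X\beta})/\partial\beta = -{\bf X}$, which is exactly the step the question is missing.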

As for the second part, I'm pretty dead in the water. This is one of those "it is simple to show..." type situations.
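For the second part, the usual route is to note that ${\bf b}_{\mathrm GLS} = {\bf H y}$ is linear in ${\bf y}$ with ${\bf H} = ({\bf X^\top \Sigma^{-1} X})^{-1}{\bf X^\top \Sigma^{-1}}$, so $V({\bf b}_{\mathrm GLS}) = {\bf H}\,\Sigma\,{\bf H}^\top$, and the sandwich collapses to $({\bf X^\top \Sigma^{-1} X})^{-1}$. A NumPy sketch checking that collapse numerically (again with arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3
X = rng.normal(size=(n, p))
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)   # arbitrary SPD error covariance
Sinv = np.linalg.inv(Sigma)

M = np.linalg.inv(X.T @ Sinv @ X)  # (X' S^{-1} X)^{-1}
H = M @ X.T @ Sinv                 # b_GLS = H y is linear in y

# V(b) = H Var(y) H' = H Sigma H'; the middle Sigma cancels one S^{-1}:
V = H @ Sigma @ H.T
print(np.allclose(V, M))  # True: sampling variance equals (X' S^{-1} X)^{-1}
```

The algebra behind the check is one line: ${\bf H}\Sigma{\bf H}^\top = {\bf M X^\top \Sigma^{-1}}\,\Sigma\,{\bf \Sigma^{-1} X M} = {\bf M}({\bf X^\top \Sigma^{-1} X}){\bf M} = {\bf M}$.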