Generalized Least Squares results


I've got the following problem:

Let $Y\sim N_n(X\beta, \sigma^2 V)$. Prove that, if $\hat{\beta} = (X^{\prime}V^{-1}X)^{-1}X^{\prime}V^{-1}Y$ then:

  1. $SSR = (Y-X\hat{\beta})^{\prime}V^{-1}(Y-X\hat{\beta}) \sim \sigma^{2}\chi^{2}_{(n-p)}$.
  2. $SSR/(n-p)$ is UMVUE for $\sigma^{2}$.
  3. If $\hat{Y} = X\hat{\beta} = PY$ then $P$ is idempotent but not necessarily symmetric.
  4. $\hat{\beta}$ is BLUE for $\beta$.

Note that the exercise says nothing about the matrix $V$; I'm guessing $V$ is at least positive semi-definite, or even positive definite, since $\sigma^{2}V$ is a covariance matrix.

My attempt:

  1. Reading Seber's *Linear Regression Analysis*, I found a theorem that says that if $Y\sim N_n(\mu, \Sigma)$ where $\Sigma$ is positive-definite, then $(Y-\mu)^{\prime}\Sigma^{-1}(Y-\mu)\sim \chi^{2}_{n}$.

Since $Y-X\hat{\beta}\sim N_n(0,\sigma^{2}V)$ and $\Sigma = \sigma^2 V$ is positive-definite, I get $SSR = (Y-X\hat{\beta})^{\prime}\Sigma^{-1}(Y-X\hat{\beta})\sim \chi^{2}_{(n)}$; but the exercise says the distribution is $\chi^2_{(n-p)}$, which, if I'm not wrong, would hold iff $\operatorname{rank}(\Sigma)=n-p$. If that's so, how can I prove $\operatorname{rank}(\Sigma)=n-p$?
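As a sanity check on which degrees of freedom are right, here is a quick Monte Carlo sketch (with a hypothetical $X$ and positive-definite $V$ of my own, not from the exercise) that estimates $\operatorname E(SSR/\sigma^2)$, which should be $n-p$ if $SSR/\sigma^2\sim\chi^2_{(n-p)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 8, 3, 2.0

# Hypothetical design matrix X and positive-definite V (my own choices).
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n))
V = A @ A.T + n * np.eye(n)          # positive definite by construction
beta = rng.standard_normal(p)

Vinv = np.linalg.inv(V)
H = np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv   # beta_hat = H @ Y

L = np.linalg.cholesky(sigma2 * V)   # to sample Y ~ N(X beta, sigma^2 V)
reps = 20000
ssr = np.empty(reps)
for i in range(reps):
    Y = X @ beta + L @ rng.standard_normal(n)
    r = Y - X @ (H @ Y)              # GLS residual
    ssr[i] = r @ Vinv @ r

# If SSR/sigma^2 ~ chi^2_{n-p}, this should be close to n - p = 5.
print(ssr.mean() / sigma2)
```

The simulated mean comes out near $n-p$, not $n$, which supports the exercise's claim.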

  2. For this, I think the result is trivial once I have proved 1.
  3. I'm totally lost at this one; for the idempotent property, it's as simple as

$$\hat{Y} = X\hat{\beta} = PY \implies P = X(X^\prime V^{-1}X)^{-1}X^{\prime}V^{-1}$$ $$P^{2} = X(X^\prime V^{-1}X)^{-1}X^{\prime}V^{-1} X(X^\prime V^{-1}X)^{-1}X^{\prime}V^{-1} = X(X^\prime V^{-1}X)^{-1}X^{\prime}V^{-1} = P. $$

But for proving that, in general, $P$ is not symmetric, I'm confused: should I give a counterexample or something?
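In case it helps, here is a concrete numeric counterexample (my own choice of $X$ and $V$, not from the exercise) showing $P = X(X^\prime V^{-1}X)^{-1}X^{\prime}V^{-1}$ idempotent but not symmetric:

```python
import numpy as np

# A small concrete example: simple linear regression design, diagonal V.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
V = np.diag([1.0, 2.0, 3.0])        # positive definite, not a multiple of I

Vinv = np.linalg.inv(V)
P = X @ np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv

print(np.allclose(P @ P, P))        # True: P is idempotent
print(np.allclose(P, P.T))          # False: P is not symmetric
```

So the asymmetry already shows up whenever $V$ is not a multiple of the identity; $P$ is instead self-adjoint with respect to the inner product $\langle x, y\rangle = x^\prime V^{-1} y$.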

  4. I've already found that

$$\mathbb{E}[\hat{\beta}] = \beta \mbox{ and } Var(\hat{\beta}) = \sigma^{2}(X^\prime V^{-1}X)^{-1}$$

Is that enough to conclude that $\hat{\beta}$ is BLUE?
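For what it's worth, the variance formula I found can be checked numerically (again with a hypothetical $X$ and $V$), using $\hat{\beta} = HY$ with $H = (X^\prime V^{-1}X)^{-1}X^\prime V^{-1}$ and $Var(\hat{\beta}) = H\,Cov(Y)\,H^\prime$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma2 = 6, 2, 1.5

# Hypothetical X and positive-definite V for illustration.
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n))
V = A @ A.T + n * np.eye(n)

Vinv = np.linalg.inv(V)
H = np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv   # beta_hat = H @ Y

# Var(beta_hat) = H Cov(Y) H' with Cov(Y) = sigma^2 V:
var_bhat = H @ (sigma2 * V) @ H.T
print(np.allclose(var_bhat, sigma2 * np.linalg.inv(X.T @ Vinv @ X)))  # True
```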

Any help would be appreciated.

Answer:
For $V^{-1}$ to exist, it is necessary that $V$ have full rank. The statement $Y\sim N_n(X\beta, \sigma^2 V)$ makes sense only if $V$ is positive semi-definite, and a full-rank positive semi-definite matrix is positive definite.

It is correct that $Y-X\beta\sim\operatorname N(0,\sigma^2 V),$ but it is not correct that $Y-X\widehat\beta \sim\operatorname N(0,\sigma^2 V).$ In fact, the variance of $Y-X\widehat\beta$ is a singular matrix of rank $n-p.$ Think about where $\widehat\beta$ comes from.
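You can see this rank numerically: with $P = X(X^\prime V^{-1}X)^{-1}X^\prime V^{-1}$ and $Y - X\widehat\beta = (I-P)Y$, the covariance is $(I-P)(\sigma^2 V)(I-P)^\prime$ (a sketch with an arbitrary $X$ and positive-definite $V$ of my choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma2 = 7, 3, 1.0

# Arbitrary full-rank X and positive-definite V for illustration.
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n))
V = A @ A.T + n * np.eye(n)

Vinv = np.linalg.inv(V)
P = X @ np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv

# Cov(Y - X beta_hat) = (I - P) (sigma^2 V) (I - P)'
M = (np.eye(n) - P) @ (sigma2 * V) @ (np.eye(n) - P).T
print(np.linalg.matrix_rank(M))   # n - p = 4
```

The rank drops by $p$ because $I-P$ annihilates the $p$-dimensional column space of $X$.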

To prove that $\text{SSR}/(n-p)$ is UMVUE for $\sigma^2$, you need to show that it admits no nontrivial unbiased estimators of zero, i.e. there is no function $f$ other than $f\equiv0$, not depending on $\sigma$, for which $\operatorname E(f(\text{SSR}))$ remains equal to $0$ as $\sigma>0$ changes.

Proving that $\widehat\beta$ is BLUE for $\beta$ should not require the assumption of normality, only the assumptions on the expected value (an $n\times1$ column vector) and the variance (an $n\times n$ matrix) of $Y.$ As far as I recall, I have never gone through the details of that one; maybe it is worth a separate posted question.
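As a partial illustration of the BLUE claim (the Aitken/Gauss–Markov theorem), here is a numeric sketch, with a hypothetical $X$ and $V$, comparing the GLS variance $\sigma^2(X^\prime V^{-1}X)^{-1}$ against the variance of one competing linear unbiased estimator, ordinary least squares; the difference should be positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma2 = 8, 3, 1.0

# Hypothetical X and positive-definite V for illustration.
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n))
V = A @ A.T + n * np.eye(n)
Vinv = np.linalg.inv(V)

var_gls = sigma2 * np.linalg.inv(X.T @ Vinv @ X)

# OLS is also linear and unbiased, but its variance under Cov(Y) = sigma^2 V is:
XtX_inv = np.linalg.inv(X.T @ X)
var_ols = sigma2 * XtX_inv @ X.T @ V @ X @ XtX_inv

# Aitken's theorem says var_ols - var_gls is positive semi-definite.
eigs = np.linalg.eigvalsh(var_ols - var_gls)
print(eigs.min() >= -1e-6)
```

Of course this only checks one competitor; the actual proof bounds the variance of every linear unbiased estimator $AY$ with $AX = I$.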