In my statistics class we showed that for Ordinary Least Squares (OLS) regression, with error covariance matrix $\Sigma$:
$$ var[\hat{\beta}_{OLS}] = (X^TX)^{-1}X^T\Sigma X(X^TX)^{-1} $$
and for Generalized Least Squares (GLS) regression (i.e. allowing the errors to be heteroskedastic):
$$ var[\hat{\beta}_{GLS}] = (X^T\Sigma^{-1}X)^{-1} $$
where for GLS the covariance matrix $\Sigma$ is still diagonal, but its diagonal entries are not necessarily all equal, as they are assumed to be for OLS.
We were told that
$$ var[\hat{\beta}_{GLS}] \le var[\hat{\beta}_{OLS}] $$
in the sense that the difference $var[\hat{\beta}_{OLS}] - var[\hat{\beta}_{GLS}]$ is positive semidefinite. How can we prove that this is the case?
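For what it's worth, here is a small numerical sanity check of the claim (a sketch in pure Python with $p = 2$ predictors; the design matrix and the heteroskedastic error variances below are arbitrary made-up values, and the diagonal error covariance matrix is written as Sigma):

```python
# Made-up example: n = 5 observations, intercept plus one predictor.
X = [(1.0, 0.3), (1.0, 1.7), (1.0, -0.5), (1.0, 2.1), (1.0, 0.9)]
s2 = [0.5, 2.0, 1.0, 3.0, 0.8]  # diagonal of Sigma (unequal error variances)

def outer_sum(weights):
    """Return sum_i w_i * x_i x_i^T as a 2x2 matrix, i.e. X^T W X for diagonal W."""
    M = [[0.0, 0.0], [0.0, 0.0]]
    for w, (a, b) in zip(weights, X):
        M[0][0] += w * a * a
        M[0][1] += w * a * b
        M[1][0] += w * b * a
        M[1][1] += w * b * b
    return M

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def mul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

XtX_inv = inv2(outer_sum([1.0] * len(X)))                # (X^T X)^{-1}
var_ols = mul2(mul2(XtX_inv, outer_sum(s2)), XtX_inv)    # sandwich formula
var_gls = inv2(outer_sum([1.0 / v for v in s2]))         # (X^T Sigma^{-1} X)^{-1}

# var_ols - var_gls should be positive semidefinite; for a symmetric 2x2
# matrix that means nonnegative diagonal entries and nonnegative determinant.
D = [[var_ols[i][j] - var_gls[i][j] for j in range(2)] for i in range(2)]
det_D = D[0][0] * D[1][1] - D[0][1] * D[1][0]
print(D[0][0] >= -1e-12, D[1][1] >= -1e-12, det_D >= -1e-12)
# prints: True True True
```

The check confirms the claimed ordering on this example, but of course a simulation is not a proof.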