Why is the inverse of the sample covariance matrix a biased estimator of the true precision matrix?


So this paper claims that the sample covariance $S$ is an unbiased estimator of the true covariance $\Sigma$ - that makes sense.

However, if we take the inverse of said matrices, $S^{-1}$ is no longer an unbiased estimator of $\Sigma^{-1}$. In fact: $$E(S^{-1})=\frac{T}{T-N-2}\Sigma^{-1},$$ where $T$ is the number of observations and $N$ the dimension. I am looking for some explanation of this, preferably with some intuition behind it.
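The bias is easy to check numerically. Below is a minimal Monte Carlo sketch (my own illustration, not from the paper; $N$, $T$, and the number of trials are arbitrary choices): it draws $T$ Gaussian samples in $N$ dimensions, averages $S^{-1}$ over many trials, and compares the inflation factor to the constant above. Note the constant depends on convention: with the mean estimated from data and the $T-1$ denominator, the classical Wishart result gives $(T-1)/(T-N-2)$, which $T/(T-N-2)$ approximates for large $T$.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 3, 20          # dimension and sample size (illustrative choices)
Sigma = np.eye(N)     # true covariance; identity keeps the comparison simple
trials = 20000

inv_sum = np.zeros((N, N))
for _ in range(trials):
    X = rng.multivariate_normal(np.zeros(N), Sigma, size=T)
    S = np.cov(X, rowvar=False)       # unbiased sample covariance (T-1 denominator)
    inv_sum += np.linalg.inv(S)

E_Sinv = inv_sum / trials
# Since Sigma = I, the mean diagonal entry estimates the inflation factor.
ratio = np.mean(np.diag(E_Sinv))

print(ratio)  # close to (T-1)/(T-N-2) = 19/15 ≈ 1.267, i.e. biased upward
```

So on average $S^{-1}$ overshoots $\Sigma^{-1}$ by roughly 27% at these sizes, and the factor blows up as $N$ approaches $T$.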

All help would be greatly appreciated!


I would guess that it is a consequence of Jensen's inequality for real symmetric matrices. Consider a r.v. $X$ with $var(X)=\sigma^2$, and let $g$ be some convex function, e.g., $g(x)=1/x$ for $x>0$, so that $g'(x)<0$ and $g''(x)>0$ for $x>0$. Then if $S$ is an unbiased estimator of $\sigma^2$, $1/S$ is a biased estimator of $1/\sigma^2$ for every finite $n$ and non-degenerate $X$, i.e.,
$$ \mathbb E g(S)= \mathbb E S^{-1} \ge g(\mathbb ES) = g(\sigma^2)=\sigma^{-2}, $$ with strict inequality when $S$ is non-degenerate, since $g$ is strictly convex. The generalization to $X$ with uncorrelated entries is straightforward, since $cov(X) = diag(\sigma_1^2,...,\sigma_n^2)$; for correlated components I guess proving the statement will require a spectral decomposition of $\Sigma$.
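In the scalar Gaussian case the Jensen gap can even be computed exactly: $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$ and $\mathbb E[1/\chi^2_k]=1/(k-2)$ for $k>2$, so $\mathbb E[1/S^2]=\frac{n-1}{n-3}\sigma^{-2}>\sigma^{-2}$. A quick simulation sketch (my own illustration; $n$ and $\sigma$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

n, sigma = 10, 2.0        # assumed sample size and std dev for illustration
trials = 100_000

X = rng.normal(0.0, sigma, size=(trials, n))
S2 = X.var(axis=1, ddof=1)        # unbiased sample variance, one per trial

mean_inv = np.mean(1.0 / S2)      # Monte Carlo estimate of E[1/S^2]
exact = (n - 1) / ((n - 3) * sigma**2)   # (n-1)/((n-3) sigma^2) = 9/28 ≈ 0.321

print(mean_inv, exact, 1.0 / sigma**2)  # mean_inv ≈ exact > 1/sigma^2 = 0.25
```

The simulated $\mathbb E[1/S^2]$ matches the exact value and strictly exceeds $1/\sigma^2$, exactly the Jensen effect described above.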