The inverse of the covariance matrix estimator


I'm reading a paper, and it says that if $\Sigma$ is the covariance matrix of a random vector $X \in \mathbb{R}^p$, $X \sim N(\mu_X, \Sigma)$, and $\widehat{\Sigma} = (n-1)^{-1} \sum_{i=1}^{n}(X_i-\bar{X})(X_i-\bar{X})^T$, then $\widehat{\Sigma}^{-1}$ exists with probability $1$. Is there an easy way to prove this? I think the proof should use the fact that $\Sigma^{-1}$ exists (since $\Sigma$ is positive definite), but $\widehat{\Sigma}$ is not necessarily positive definite. Thanks
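As a quick numerical sanity check (a NumPy sketch, not from the paper; the specific $\Sigma$ below is an arbitrary positive definite choice), the sample covariance $\widehat{\Sigma}$ is rank-deficient, hence singular, whenever $n \le p$, and invertible once $n > p$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
mu = np.zeros(p)
Sigma = np.eye(p) + 0.3 * np.ones((p, p))  # an arbitrary positive definite covariance

def sample_cov(X):
    # (n-1)^{-1} sum_i (X_i - Xbar)(X_i - Xbar)^T
    return np.cov(X, rowvar=False, ddof=1)

# With n <= p the centered observations span at most an (n-1)-dim subspace,
# so the sample covariance has rank at most n-1 and cannot be inverted.
X_small = rng.multivariate_normal(mu, Sigma, size=3)   # n = 3 < p = 4
print(np.linalg.matrix_rank(sample_cov(X_small)))      # at most 2

# With n > p it is positive definite (hence invertible) with probability 1.
X_big = rng.multivariate_normal(mu, Sigma, size=100)   # n = 100 > p
S = sample_cov(X_big)
print(np.all(np.linalg.eigvalsh(S) > 0))
```

So the claim in the paper implicitly assumes $n > p$ (and a non-degenerate $\Sigma$).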


There is 1 best solution below


You can use the eigendecomposition of $\Sigma$ (the basis of PCA), i.e., $\Sigma = P \Lambda P^T$, where $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_p)$. The eigenvalue $\lambda_i$ is the variance of the $i$th principal component, so each $\lambda_i$ is non-negative; therefore $\Sigma$ is positive semi-definite. For a non-degenerate random vector, $\lambda_i > 0$ for all $i = 1, \ldots, p$, i.e., $\Sigma$ is a positive definite matrix. The eigenvalues of $\Sigma^{-1}$ are $1/\lambda_i$ respectively, hence the inverse is also positive definite. The same holds for the sample estimates $\hat{\Sigma}$ and $\hat{\Sigma}^{-1}$, provided $n - 1 \ge p$: with probability $1$ the centered observations $X_i - \bar{X}$ span all of $\mathbb{R}^p$ (a Gaussian sample with positive definite $\Sigma$ falls in any fixed proper subspace with probability $0$), so $\hat{\Sigma}$ is positive definite almost surely. Here the non-degeneracy requirement means that no component of $X$ is constant (and, more generally, that $\Sigma$ has full rank).
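The eigenvalue relationship above is easy to verify numerically (a NumPy sketch; the $3 \times 3$ covariance matrix is an arbitrary positive definite example): the eigenvalues of $\hat{\Sigma}$ are all positive, and the eigenvalues of $\hat{\Sigma}^{-1}$ are exactly their reciprocals.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 500
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])  # arbitrary positive definite covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

S = np.cov(X, rowvar=False, ddof=1)   # sample covariance, (n-1) denominator
lam, P = np.linalg.eigh(S)            # eigendecomposition: S = P diag(lam) P^T
print(np.all(lam > 0))                # S is positive definite

# Eigenvalues of S^{-1} are the reciprocals 1/lam_i of those of S.
lam_inv = np.linalg.eigvalsh(np.linalg.inv(S))
print(np.allclose(np.sort(lam_inv), np.sort(1.0 / lam)))
```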