covariance matrix is not positive definite


I have a feature vector (FV1) of size 1*n. I subtract the mean of all feature vectors from FV1, then take its transpose (FV1_Transpose), which is n*1. I then compute the matrix product FV1_Transpose * FV1 to get a covariance matrix, which is n*n.

But my problem is that I don't get a positive definite matrix. I have read everywhere that a covariance matrix should be symmetric positive definite.

FV1 after subtraction of mean = [-17.7926788, 0.814089298, 33.8878059, -17.8336430, 22.4685001]

Covariance matrix =
[  316.579407   -14.4848289  -602.954834   317.308289  -399.774811;
   -14.4848289    0.662741363   27.5876999  -14.5181780   18.2913647;
  -602.954834    27.5876999  1148.38342   -604.343018   761.408142;
   317.308289   -14.5181780  -604.343018   318.038818  -400.695221;
  -399.774811    18.2913647   761.408142  -400.695221   504.833496 ]
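The construction described above can be reproduced directly to check the claimed properties (a sketch in numpy; the question names no language, and the variable names here are mine):

```python
import numpy as np

# Mean-subtracted feature vector from the question (a 1*n row vector)
u = np.array([-17.7926788, 0.814089298, 33.8878059, -17.8336430, 22.4685001])

# Outer product FV1_Transpose * FV1 gives the n*n matrix
C = np.outer(u, u)

# It is symmetric by construction...
assert np.allclose(C, C.T)

# ...but it is the outer product of a single vector, so it has rank 1:
# one positive eigenvalue, and the rest are (numerically) zero
print(np.linalg.matrix_rank(C))  # 1
```

The rank-1 structure is the key observation: a single mean-subtracted sample can never produce a positive definite matrix when n > 1.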

This covariance matrix is not positive definite. Any idea why that is?

Thanks in advance.

Best Answer:

What is actually true is that a covariance matrix must be positive semidefinite. It can have eigenvalues of $0$, corresponding to hyperplanes that all the data lie in.

Now if you have a matrix that is positive semidefinite but not positive definite, but your computation is numerical and thus incurs some roundoff error, you may end up with a matrix that has some small negative eigenvalues. That is presumably what has happened here, where two of the eigenvalues are approximately $-0.0000159575212286663$ and $-0.0000136360857634093$. These, as well as the next two very small positive eigenvalues, should probably be $0$. Your matrix is very close to the rank-1 matrix $u^T u$, where $u = [-17.7927, 0.814089, 33.8878, -17.8336, 22.4685]$. Thus your data points should all be very close to a line in this direction.