(People who follow the Bayesian tag, please read the third paragraph.)
Problem: I need to calculate the pseudo-determinant of a matrix (preferably in MATLAB, but no built-in function is available; I can write one). I see two different methods for calculating pseudo-determinants: one from Wikipedia and one from the Maple online help page (search for "ipseudo"). I don't use Maple, but I tried to read that method and couldn't understand it.
The Wikipedia method (the first equation on that page) is easy to implement, but I am unsure about its correctness. Essentially, I tried to find the lowest $\alpha$ that makes $\det(A+\alpha I)$ non-zero. This method gives determinant values of 4e-11 and 5e+14. As I change the definition of tending to zero (i.e. testing determinant > 1e-10, or > 1e-100, etc.), the values jump between 4e-11 and 5e+14, so I am pretty sure it's not correct. Does anybody know how to calculate the pseudo-determinant of a 200×200 matrix?
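For what it's worth, the Wikipedia formula is a limit, $\operatorname{pdet}(A) = \lim_{\alpha\to 0} \det(A+\alpha I)/\alpha^{n-\operatorname{rank}(A)}$; dividing by the power of $\alpha$ is essential, which may be why the raw determinant values jump around. A minimal numpy sketch (the helper name `pdet_limit` and the choice of a small fixed `alpha` are my own assumptions, not from any source):

```python
import numpy as np

def pdet_limit(A, alpha=1e-8):
    # Approximates pdet(A) = lim_{a -> 0} det(A + a*I) / a^(n - rank(A)).
    # Without the division by alpha^(n - rank), the raw determinant
    # tends to 0 (or blows up) as alpha shrinks.
    n = A.shape[0]
    r = np.linalg.matrix_rank(A)
    return np.linalg.det(A + alpha * np.eye(n)) / alpha ** (n - r)

# Rank-1 example: diag(2, 0) has pseudo-determinant 2.
A = np.diag([2.0, 0.0])
print(pdet_limit(A))  # approximately 2
```

Even with the division, this limit is numerically fragile for a 200×200 matrix (the determinant of `A + alpha*I` can underflow or overflow long before `alpha` is small); an SVD-based product of non-zero singular values is the more stable route.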
Interested people can read further. I want to give some background on why I am calculating the pseudo-determinant, and to get your opinion on whether this is even necessary.
I want to use Bayes' theorem and calculate the likelihood. Since my feature space is 200-dimensional, I decided to use a multivariate Gaussian distribution (MVN). I calculated the covariance matrix and then found that it is singular. So I read further on that page and came across the degenerate case, which handles singular covariance matrices. That's where I am told to calculate the pseudo-determinant of the matrix. Is this needed? I feel it is; otherwise, how do I calculate the likelihood?
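For the degenerate MVN, the density on the support of the covariance replaces the determinant with the pseudo-determinant and the inverse with the pseudo-inverse. A hedged numpy sketch of the log-density (the function name `degenerate_mvn_logpdf` and the rank tolerance `tol` are my own choices; this assumes `x` lies in the support of `cov`):

```python
import numpy as np

def degenerate_mvn_logpdf(x, mu, cov, tol=1e-10):
    # Log-density of a (possibly degenerate) multivariate Gaussian:
    # -0.5 * [ rank*log(2*pi) + log pdet(cov) + (x-mu)' pinv(cov) (x-mu) ]
    # For a PSD covariance, singular values equal eigenvalues, so the
    # pseudo-determinant is the product of the non-zero singular values;
    # summing logs avoids overflow in 200 dimensions.
    d = np.asarray(x) - np.asarray(mu)
    s = np.linalg.svd(cov, compute_uv=False)
    rank = int(np.sum(s > tol * s.max()))
    log_pdet = np.sum(np.log(s[:rank]))
    quad = d @ np.linalg.pinv(cov) @ d
    return -0.5 * (rank * np.log(2 * np.pi) + log_pdet + quad)
```

For a full-rank covariance this reduces to the ordinary MVN log-density, which gives an easy sanity check against a standard normal.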
Isn't the covariance matrix symmetric? If so, doesn't the singular value decomposition do what you want? To be precise: compute the SVD and take the product of the non-zero singular values.