Any trick to swap order of determinant and matrix inverse operation?


I've been thinking through fitting a kind of Gaussian mixture model in more of a neural-network style (similar in spirit to RNADE or RMADE by Larochelle, without going into details), and I see that this could scale really nicely to very large dimensions if I could avoid having to know both $\Sigma$ AND $\Sigma^{-1}$, as the multivariate Gaussian density requires. That is, from what I'm seeing, the Gaussian PDF...

$\det(2\pi\Sigma)^{-\frac12}e^{-\frac12(x-\mu)^t \Sigma^{-1}(x-\mu)}$

seems to require both the covariance and precision matrices. I suspect the answer is no, but is there a way I could fit parameters for the precision matrix and avoid having to invert it to get the normalization constant? (E.g., if I could swap the order of the determinant and inverse-square-root operations and only needed $\Sigma^{-\frac12}$, I could do a Cholesky decomposition of the precision matrix.)
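As a quick numeric sanity check of what I'm hoping for (a minimal sketch assuming NumPy, with an arbitrary SPD matrix standing in for $\Sigma$): since $\det(\Sigma^{-1}) = 1/\det(\Sigma)$, the normalization term $\det(\Sigma)^{-\frac12}$ can be read off the precision matrix directly, and from the diagonal of its Cholesky factor in particular.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary SPD covariance Sigma and its precision matrix Lambda.
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)
Lambda = np.linalg.inv(Sigma)

# det(Lambda) = 1 / det(Sigma), so det(Sigma)^(-1/2) = det(Lambda)^(1/2):
lhs = np.linalg.det(Sigma) ** -0.5
rhs = np.linalg.det(Lambda) ** 0.5
print(np.allclose(lhs, rhs))  # True

# Via Cholesky of the precision matrix, Lambda = L L^T:
# det(Lambda) = prod(diag(L))^2, so det(Sigma)^(-1/2) = prod(diag(L)).
L = np.linalg.cholesky(Lambda)
print(np.allclose(lhs, np.prod(np.diag(L))))  # True
```

So, if this reasoning is right, the covariance matrix never needs to be formed at all: both the quadratic form and the normalization constant come from the precision matrix (or its Cholesky factor).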

--update--

Just to add to this: I'm asking along the same lines as the property that the transpose of the inverse is the inverse of the transpose. Is such a thing true of the determinant, i.e., is $\det(\Sigma)^\frac12 = \det(\Sigma^\frac12)$?
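A numeric check of this identity (a sketch assuming NumPy, an arbitrary SPD $\Sigma$, and the principal matrix square root built via eigendecomposition): since the determinant is multiplicative, $\det(\Sigma) = \det(\Sigma^\frac12 \Sigma^\frac12) = \det(\Sigma^\frac12)^2$, which would make the identity hold for the SPD square root.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)  # arbitrary SPD matrix

# Principal (SPD) square root via eigendecomposition: Sigma = Q diag(w) Q^T.
w, Q = np.linalg.eigh(Sigma)
Sigma_half = Q @ np.diag(np.sqrt(w)) @ Q.T

# det is multiplicative, so det(Sigma)^(1/2) should equal det(Sigma^(1/2)).
print(np.allclose(np.linalg.det(Sigma) ** 0.5,
                  np.linalg.det(Sigma_half)))  # True
```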