Can one factorise a covariance matrix analytically or iteratively?


I have a covariance matrix which I would like to factorise. In more detail, I would like to represent it in the following form:

$ m \approx f \cdot f^T + diag(d^2), $

where $diag(d^2)$ denotes the diagonal matrix whose diagonal entries are the element-wise squares of the vector $d$.

For example, $m$ (the original covariance matrix) might be 100 by 100, $f$ could be a 100 by 3 matrix, and $d$ a 100-dimensional vector.

Is there a fast way to do this? An analytical solution would be great; an iterative procedure is also fine.

I should add that I want to minimise the mean squared deviation between the elements of the original covariance matrix and those of its approximate representation (given above).
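The model being fit here (low-rank part plus a diagonal) is the classical factor-analysis model, and one standard way to fit it is a principal-factor style alternating iteration: fix $d$, take the top-$k$ eigenpairs of $m - diag(d^2)$ to get $f$; then refresh $d^2$ from the leftover diagonal. Below is a sketch of that iteration in NumPy; the function name, initialisation, and iteration count are my own choices, not something from the question, and convergence to the global MSE minimum is not guaranteed.

```python
import numpy as np

def factorize(m, k=3, n_iter=100):
    """Approximate m ~ f @ f.T + diag(d**2) by alternating updates.

    Principal-factor style sketch: given d, take the top-k eigenpairs
    of m - diag(d**2) to form f; given f, refresh d**2 from the
    remaining diagonal of m. Heuristic, not a guaranteed MSE minimiser.
    """
    d2 = 0.5 * np.diag(m).copy()              # initial guess for d**2
    for _ in range(n_iter):
        reduced = m - np.diag(d2)
        w, v = np.linalg.eigh(reduced)        # eigenvalues in ascending order
        idx = np.argsort(w)[::-1][:k]         # indices of the k largest
        w_top = np.clip(w[idx], 0.0, None)    # keep f @ f.T positive semidefinite
        f = v[:, idx] * np.sqrt(w_top)
        d2 = np.clip(np.diag(m) - np.sum(f**2, axis=1), 1e-12, None)
    return f, np.sqrt(d2)

# Usage on synthetic data that exactly has the assumed structure:
rng = np.random.default_rng(0)
g = rng.normal(size=(100, 3))
e = rng.uniform(0.1, 1.0, size=100)
m = g @ g.T + np.diag(e**2)
f, d = factorize(m, k=3)
approx = f @ f.T + np.diag(d**2)
rel_err = np.linalg.norm(m - approx) / np.linalg.norm(m)
```

On data generated exactly from the model, the relative Frobenius error should become small after a modest number of iterations; on a general covariance matrix it only converges to a local fit.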


Generally speaking you can't do this exactly for arbitrary $m$. To see this, note that $f f^T + diag(d^2)$ is a symmetric positive definite matrix whenever every entry of $d$ is nonzero, while your covariance matrix can be singular (have determinant $0$).

You might want to check out the Cholesky decomposition. If you drop the $diag(d^2)$ term and just focus on $f$, you are looking for $m = f f^T$. For a non-singular covariance matrix you can compute such an $f$ (a lower-triangular one) numerically without any approximation.
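As a quick illustration of the answer's point, here is a minimal NumPy sketch (the matrix construction is my own example): for a symmetric positive definite $m$, `np.linalg.cholesky` returns a lower-triangular $f$ with $f f^T = m$ exactly, no diagonal term needed.

```python
import numpy as np

# Build a non-singular (symmetric positive definite) covariance matrix.
rng = np.random.default_rng(1)
a = rng.normal(size=(5, 5))
m = a @ a.T + 5.0 * np.eye(5)   # adding 5*I keeps it safely non-singular

f = np.linalg.cholesky(m)       # lower-triangular Cholesky factor
# f @ f.T reproduces m up to floating-point rounding.
```

Note that this $f$ is 5 by 5, not low rank: Cholesky gives an exact square factor, which is why the original question's 100 by 3 requirement forces an approximation instead.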