Ensure a random matrix is positive semidefinite and of fixed rank


Given a real random matrix $M$, I would like to build a real symmetric positive semidefinite matrix of rank $k$.

To obtain a symmetric PSD matrix I can simply take $X = M + M^T$ (which is symmetric) and then $Y = X^T X$ (which is PSD, and positive definite when $X$ is invertible).
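For instance, this construction can be checked numerically with NumPy (the variable names mirror the question; the matrix size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))  # random real matrix

X = M + M.T        # symmetric, but may have negative eigenvalues
Y = X.T @ X        # symmetric and PSD by construction

print(np.allclose(Y, Y.T))                    # True: symmetric
print(np.linalg.eigvalsh(Y).min() >= -1e-10)  # True: no negative eigenvalues
```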

How can I build a positive semidefinite matrix of fixed rank $k$ from $M$?

Edit: I am only considering the cases $k=2$ and $k=3$.

Best answer

If you are willing to compute an eigenvalue decomposition, you can work with ${\bf M = TDT}^{-1}$ and set all but the $k$ largest entries along the diagonal of $\bf D$ to $0$ (clipping any negative eigenvalues to $0$ as well). The drawback is that a full eigenvalue decomposition can be computationally demanding.
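A minimal sketch of this full-decomposition route in NumPy (the function name and the symmetrization step are my own choices; since the matrix is symmetrized, `eigh` gives a real orthogonal $\bf T$, so ${\bf T}^{-1} = {\bf T}^T$):

```python
import numpy as np

def psd_rank_k(M, k):
    """Build a symmetric PSD matrix of rank at most k from a random M
    via a full eigendecomposition (a sketch, not the only possible route)."""
    S = (M + M.T) / 2          # symmetric part of M, so eigh applies
    w, V = np.linalg.eigh(S)   # eigenvalues in ascending order, V orthogonal
    w = np.clip(w, 0.0, None)  # clip negative eigenvalues to 0 -> PSD
    w[:-k] = 0.0               # zero all but the k largest eigenvalues
    return (V * w) @ V.T       # reassemble V diag(w) V^T

rng = np.random.default_rng(1)
Y = psd_rank_k(rng.standard_normal((6, 6)), k=3)
# Y is symmetric, PSD, and of rank at most 3
```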


A faster approach is to use a power iteration method to find the $k$ largest-modulus eigenvalues (and corresponding eigenvectors) of $\bf M$, modify those eigenvalues so that they are real and $\geq 0$, and then construct ${\bf M = TDT}^{\dagger}$, where $\bf T$ and $\bf D$ are the new sparse matrices built from the modified eigenpairs and ${\bf T}^\dagger$ is a pseudo-inverse, for example the Moore-Penrose pseudo-inverse.


A basic power iteration method, step by step:

  1. Start with a random column vector ${\bf v}_0$ and the matrix $\bf M$ whose eigenvalues you want to find. Calculate ${\bf v}_1 = {\bf Mv}_0$ and set $k=1$.
  2. For as long as ${\bf v}_k$ is not parallel enough to ${\bf v}_{k-1}$:
    1. Calculate ${\bf v}_{k+1} = {\bf Mv}_k$.
    2. If needed, renormalize ${\bf v}_{k+1}$, for example to avoid overflow or other numerical issues.
    3. Increase $k$ by one.
  3. When done, the last calculated ${\bf v}_{k+1}$ is an estimate of the dominant eigenvector, and each scalar of the Schur/Hadamard (element-wise) division of ${\bf v}_{k+1}$ by ${\bf v}_k$ is an estimate of the largest-modulus eigenvalue of $\bf M$.
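The steps above can be sketched as follows (parameter names and the convergence test are my own choices; I use a Rayleigh quotient for the eigenvalue estimate, which for a unit vector and symmetric $\bf M$ plays the same role as the element-wise ratio):

```python
import numpy as np

def power_iteration(M, tol=1e-12, max_iter=10_000, seed=0):
    """Basic power iteration: estimate a largest-modulus eigenvalue of M
    and its unit eigenvector (a sketch, assuming symmetric real M)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = M @ v
        w /= np.linalg.norm(w)  # renormalize each step (step 2.2)
        # "parallel enough": v and w agree up to sign
        if min(np.linalg.norm(w - v), np.linalg.norm(w + v)) < tol:
            v = w
            break
        v = w
    lam = v @ M @ v  # Rayleigh-quotient estimate of the eigenvalue
    return lam, v

lam, v = power_iteration(np.diag([3.0, 1.0, 0.5]))
# lam ≈ 3.0, the largest-modulus eigenvalue of this diagonal matrix
```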

Now you can factor out the eigenvalue and eigenvector you just found and continue finding the next one. One way to do this (deflation) is described in Horn and Johnson's Matrix Analysis, but there are many other ways as well.
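One such deflation scheme (Hotelling deflation, valid for symmetric $\bf M$) simply subtracts the found eigenpair; a minimal sketch:

```python
import numpy as np

def deflate(M, lam, v):
    """Hotelling deflation: remove the eigenpair (lam, v) from symmetric M,
    so the next power iteration converges to the next-largest eigenvalue.
    (One possible scheme among several.)"""
    v = v / np.linalg.norm(v)
    return M - lam * np.outer(v, v)

M = np.diag([3.0, 1.0])
M2 = deflate(M, 3.0, np.array([1.0, 0.0]))
# M2 has eigenvalues {0, 1}: power iteration on M2 now finds 1
```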


Or, if you think it is too inconvenient to factor, you can instead project the already-found eigenvectors out of the next randomized ${\bf v}_0$: for symmetric $\bf M$ the eigenvectors are orthogonal, so the iterate then has no component in those directions to grow whenever you multiply by $\bf M$ from the left.
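A sketch of that projection step (assuming the found eigenvectors are orthonormal, as for symmetric $\bf M$; note that in floating point the removed components slowly re-grow under repeated multiplication, so in practice one re-projects every few iterations):

```python
import numpy as np

def project_away(v0, found_eigvecs):
    """Remove from v0 its components along already-found orthonormal
    eigenvectors, so power iteration will not converge to them again."""
    for u in found_eigvecs:
        v0 = v0 - (u @ v0) * u
    return v0

v0 = project_away(np.array([1.0, 2.0]), [np.array([1.0, 0.0])])
# v0 is now [0.0, 2.0]: no component along the first eigenvector
```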