Are there many ways to decompose a matrix into a sum of rank-$1$ matrices?


Are there many ways to decompose a matrix, ${\bf{A}} \in \mathbb{R}^{n \times n}$, into a sum of rank-$1$ matrices?

I ask because I know that if a matrix is symmetric (and hence orthogonally diagonalizable), $\bf{A}=\Phi\Lambda\Phi^{T}$, then you can use the eigendecomposition to form a sum of rank-$1$ matrices:

$$\bf{A} = \sum_{i=1}^{n}\lambda_i \phi_i\phi_i^T\,.$$

It is also true that you can always use the singular value decomposition (SVD) to do this, as well:

$$\bf{A} = \sum_{i=1}^n \sigma_i \bf{u}_i\bf{v}_i^T\,.$$

So, unless these two happen to coincide, which is not the case in general, I have already found two ways. Are there infinitely many ways? What is an example of another decomposition of this form?
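The two decompositions above are easy to check numerically. A minimal numpy sketch (the numpy code, seed, and matrix size are my additions for illustration, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random symmetric matrix, so both decompositions apply.
A = rng.standard_normal((n, n))
A = A + A.T

# Eigendecomposition: A = sum_i lambda_i phi_i phi_i^T
lam, Phi = np.linalg.eigh(A)
A_eig = sum(lam[i] * np.outer(Phi[:, i], Phi[:, i]) for i in range(n))

# SVD: A = sum_i sigma_i u_i v_i^T
U, s, Vt = np.linalg.svd(A)
A_svd = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(n))

# Both rank-1 sums reconstruct A, but the individual terms differ
# whenever A has a negative eigenvalue (then sigma_i = |lambda_i|).
print(np.allclose(A, A_eig), np.allclose(A, A_svd))
```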


BEST ANSWER

Yes, there are infinitely many such decompositions into a sum of rank-$1$ matrices.

It suffices to use any factorisation of $A$ as a product of two matrices:

$$A=MN=\sum_{i=1}^n C_i L_i \tag{1}$$

where $C_i$ is the $i$th column of $M$ and $L_i$ is the $i$th row of $N$ (each outer product $C_i L_i$ is a rank-$1$ matrix).

Why are there infinitely many ways to write (1)? It suffices to take any invertible matrix $M$ and to set $N:=M^{-1}A$.

Remark: in order to understand (1), consider the $2 \times 2$ case:

$$\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}e&f\\g&h\end{pmatrix}=\begin{pmatrix}a\\ c\end{pmatrix}\begin{pmatrix}e&f\end{pmatrix}+\begin{pmatrix}b\\d\end{pmatrix}\begin{pmatrix}g&h\end{pmatrix}$$
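The recipe above can be sketched numerically. A short numpy example (the seed and matrix size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))

# Pick any invertible M and set N = M^{-1} A, so that A = M N.
M = rng.standard_normal((n, n))  # a random Gaussian matrix is invertible with probability 1
N = np.linalg.solve(M, A)

# Decomposition (1): A = sum_i (column i of M)(row i of N), n rank-1 terms.
A_sum = sum(np.outer(M[:, i], N[i, :]) for i in range(n))
print(np.allclose(A, A_sum))
```

Every fresh choice of invertible $M$ produces a different family of rank-$1$ terms, which is exactly why there are infinitely many decompositions.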

ANOTHER ANSWER

Say you want to decompose $A$ as a sum of the form $A=\sum_k \mathbf u_k\mathbf v_k^\dagger$ for some sets of vectors $\{\mathbf u_k\}$ and $\{\mathbf v_k\}$. We do not require $A$ to be square, nor the vectors to be orthogonal or normalised. In matrix notation, such a decomposition corresponds to $A=UV^\dagger$, with the matrices $U,V$ defined as $U\equiv \sum_k \mathbf u_k \mathbf e_k^\dagger$ and $V\equiv \sum_k \mathbf v_k \mathbf e_k^\dagger$.

A first way to interpret the question is: given $A$ and $V$, is there $U$ such that the identity holds? In fact, let us simplify the notation even further and ask, given $A,C$, is there $B$ such that $A=BC$? This is easy to answer because it amounts to a linear system of equations. A solution for $B$ exists iff $\operatorname{supp}(C)\supseteq\operatorname{supp}(A)$, where $\operatorname{supp}(A)$, the support of $A$, is the orthogonal complement of the kernel of $A$. In other words, a solution exists iff $C\mathbf v=0$ implies $A\mathbf v=0$.

The reason is simple:

  1. if $\operatorname{supp}(C)\not\supseteq \operatorname{supp}(A)$, then there's a vector $\mathbf v$ such that $C\mathbf v=0$ but $A \mathbf v\neq0$, and then it's impossible to find $B$ such that $A\mathbf v=BC\mathbf v$.
  2. otherwise, there is always a solution, given by $B=AC^+$ with $C^+$ the pseudoinverse. This works because $BC=AC^+C$ and $C^+C$ is the projection onto the support of $C$. So $A=BC$ holds for all $\mathbf v\in\operatorname{supp}(C)$. Furthermore, if $\mathbf v\notin\operatorname{supp}(C)$, then by hypothesis $A\mathbf v=C\mathbf v=0$, and therefore the solution $B=AC^+$ is correct on all input vectors.

It's worth noting that the solution is not unique, for the exact same reasons the solution to a linear system is not in general unique. Here, you can think of $B=AC^+$ as the inhomogeneous solution, and the general solution is $B=AC^+ +\tilde B$ for any $\tilde B$ such that $\tilde BC=0$.
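The particular solution $B=AC^+$ and its non-uniqueness can be sketched in numpy (the shapes and the deliberately rank-deficient $C$ are illustrative assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
# Make C rank-deficient: its third row is the sum of the first two.
C = rng.standard_normal((3, 4))
C[2] = C[0] + C[1]
# Build A = B0 C, so ker(C) ⊆ ker(A) and a solution is guaranteed.
A = rng.standard_normal((3, 3)) @ C

# Particular ("inhomogeneous") solution via the pseudoinverse.
B = A @ np.linalg.pinv(C)
assert np.allclose(A, B @ C)

# Non-uniqueness: add any Btilde with Btilde C = 0. Since row 3 of C
# equals row 1 + row 2, y = (1, 1, -1) satisfies y C = 0, so any
# matrix whose rows are multiples of y annihilates C from the left.
y = np.array([1.0, 1.0, -1.0])
Btilde = np.outer(rng.standard_normal(3), y)
assert np.allclose(Btilde @ C, 0)
print(np.allclose(A, (B + Btilde) @ C))  # still a valid solution
```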


Another way to interpret the question is to ask about decompositions that are "eigendecomposition-like". In other words, how many ways are there to write $A=\sum_k \lambda_k \mathbf u_k \mathbf u_k^\dagger$?

There's a rather explicit answer to this question in the case $A\ge0$, asking for decompositions of the form $A=\sum_k \mathbf x_k\mathbf x_k^\dagger$. These amount to writing $A=XX^\dagger$ for some (not necessarily square) $X=\sum_k \mathbf x_k\mathbf e_k^\dagger$.

To see it, notice that the eigendecomposition of $A$ reads $A=UDU^\dagger$ with $U$ unitary and $D\ge0$ real diagonal. Write the SVD of $X$ as $X=V\Lambda W^\dagger$ with $V,W$ isometries and $\Lambda>0$ real diagonal. If $A=XX^\dagger=V\Lambda^2 V^\dagger$ then $\Lambda^2=D$, and $V$ is a matrix whose columns are the eigenvectors of $A$. In general, if $UDU^\dagger=VDV^\dagger$ with unitary $V$, then there must be $W$ such that $U=VW$ with $W$ unitary such that $[W,D]=0$.

In conclusion, a given $A=UDU^\dagger\ge0$ decomposes as $A=XX^\dagger$ iff $X=V\Lambda W^\dagger$ with $V=U\tilde U$ for some unitary $\tilde U$ such that $[\tilde U,D]=0$, and $W$ any isometry. Exploiting the commutation properties, this can be made even more concise by realising that $\tilde U W^\dagger$ is again just the Hermitian conjugate of some isometry, and thus all decompositions $A=XX^\dagger$ have the form $X=U\sqrt D W^\dagger$ for some isometry $W$. To get an even more explicit condition, observe that $\sqrt A=U\sqrt DU^\dagger$, and thus $A=XX^\dagger$ iff there's an isometry $W$ such that $$X= \sqrt A W^\dagger.$$
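The final characterisation is easy to verify numerically in the real symmetric case. A numpy sketch (the sizes, the QR-based construction of the isometry $W$, and the seed are my assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 5
# A positive semidefinite matrix A = Y Y^T.
Y = rng.standard_normal((n, n))
A = Y @ Y.T

# Matrix square root via the eigendecomposition: sqrt(A) = U sqrt(D) U^T.
lam, U = np.linalg.eigh(A)
sqrtA = U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T

# Any isometry W (here real: W^T W = I_n) gives a valid X = sqrt(A) W^T,
# since X X^T = sqrt(A) W^T W sqrt(A) = A. Build W from a reduced QR.
W, _ = np.linalg.qr(rng.standard_normal((m, n)))  # W is m x n, W^T W = I_n
X = sqrtA @ W.T                                   # X is n x m
print(np.allclose(X @ X.T, A))
```

Varying the isometry $W$ (including its column count $m \ge \operatorname{rank} A$) sweeps out all decompositions $A = XX^\dagger$, i.e. all ways of writing $A$ as a sum of terms $\mathbf x_k \mathbf x_k^\dagger$.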