What happens to the singular values of a product of matrices after one row of one factor is zeroed

Let $M=A_nA_{n-1}\cdots A_2A_1$, where each $A_i$ is an $m\times m$ matrix in $\mathbb{R}^{m\times m}$. One can compute the SVD of $M$ and find its singular values $\sigma_1,\sigma_2,\ldots,\sigma_m$.

My question is: suppose we take $A_j$ and zero its $i$th row; can we then say anything about the singular values of the new $M$, in particular about its largest and smallest singular values? More generally, if we randomly zero multiple rows of a matrix (or of several matrices), what can be said about the singular values of the overall transformation?

If a row of $A_n$ (the leftmost factor) is zeroed, then the corresponding row of $M$ is zero, so $M$ has at least one zero singular value. For an intermediate matrix, one could zero all rows but one, and the overall transformation could still have a nonzero singular value (though at most one, since the product then has rank at most $1$).
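As a quick numerical sanity check of this last observation, here is a NumPy sketch (the matrices and seed are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4

A1 = rng.standard_normal((m, m))
A2 = rng.standard_normal((m, m))
A3 = rng.standard_normal((m, m))

# Zero all rows of the intermediate factor A2 except the first one.
A2_zeroed = np.zeros_like(A2)
A2_zeroed[0] = A2[0]

M = A3 @ A2_zeroed @ A1            # rank(M) <= rank(A2_zeroed) <= 1
sigma = np.linalg.svd(M, compute_uv=False)
print(sigma)  # generically one nonzero singular value, the rest ~ 0
```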

I don't really know how to approach this problem, other than writing out everything, but that is just going to be a huge mess and won't probably be insightful. I'd appreciate any help :)

Best answer

Unless $n=1$, all we can say is that if we zero out some $k$ rows of $A_j$, the new $M$ must have at least $k$ zero singular values (zeroing $k$ rows drops the rank of $A_j$, and hence of $M$, to at most $m-k$). There isn't any special relationship between the nonzero singular values of the old $M$ and the new $M$. In particular, one should not expect that the largest singular value of the old $M$ is always greater than or equal to the smallest nonzero singular value of the new $M$. E.g., suppose $u,v,w,\widetilde{w},x$ are five nonzero vectors such that $w=(w_1,w_2,\ldots,w_m)^T,\ \widetilde{w}=(0,w_2,\ldots,w_m)^T,\ v^Tw=0$ and $v_1w_1\ne0$. Then $v^T\widetilde{w}=v^Tw-v_1w_1=-v_1w_1$ is necessarily nonzero. Therefore, if $A=uv^T,\ B=wx^T$ and $C=\widetilde{w}x^T$ (i.e., $C$ is $B$ with its first row zeroed), we have $AB=0\ne AC$. Hence the smallest nonzero singular value of $AC$ (which exists because $AC\ne0$) is greater than the largest singular value of $AB$ (which is zero in this example).
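This construction is easy to check numerically. Below is a sketch with one concrete choice of the vectors $u,v,w,x$ (these particular values are my own; any vectors satisfying the stated conditions work):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0, 1.0])
v = np.array([1.0, -2.0, 1.0])   # v @ w == 0, but v[0]*w[0] == 1 != 0
x = np.array([1.0, 0.0, -1.0])

w_tilde = w.copy()
w_tilde[0] = 0.0                 # zero the first entry of w

A = np.outer(u, v)
B = np.outer(w, x)               # zeroing B's first row gives C below
C = np.outer(w_tilde, x)

print(np.linalg.norm(A @ B))                       # 0: every singular value of AB vanishes
print(np.linalg.svd(A @ C, compute_uv=False)[0])   # strictly positive
```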

However, when $n=1$, i.e., when $M=A_1$, we do have $\sigma_i(M_\text{old})\ge\sigma_i(M_\text{new})$ for each $i$. This is because $M_\text{new}=PM_\text{old}$, where $P$ is the diagonal $0$–$1$ projection that zeroes the chosen rows, so $M_\text{old}^TM_\text{old}-M_\text{new}^TM_\text{new}=M_\text{old}^T(I-P)M_\text{old}$ is positive semidefinite; since the eigenvalues of $M^TM$ are the squared singular values of $M$, the inequality follows.
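A numerical spot check of this monotonicity in the $n=1$ case (random matrix and zeroed row chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
M_old = rng.standard_normal((m, m))

M_new = M_old.copy()
M_new[2] = 0.0   # zero one row of M itself (the n = 1 case)

s_old = np.linalg.svd(M_old, compute_uv=False)
s_new = np.linalg.svd(M_new, compute_uv=False)
print(np.all(s_old >= s_new - 1e-12))  # sigma_i(old) >= sigma_i(new) for all i
```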

Edit. In contrast, when $n>1$, quite the opposite can be true for a "normal-looking" nonzero $M_\text{old}$. That is, it can happen that $\sigma_i(M_\text{old})<\sigma_i(M_\text{new})$ for every nonzero singular value of $M_\text{new}$. Here is a concrete example: $$ A=\pmatrix{1&1&0\\ 0&1&1\\ -1&1&-1}, \ B=\pmatrix{1&-1&0\\ 1&1&-1\\ -1&1&1}, \ C=\pmatrix{0&0&0\\ 1&1&-1\\ -1&1&1}. $$ One may verify that the singular values of $M_\text{old}=AB$ are $3.1503,\,2.0531$ and $0.9277$, while the singular values of $M_\text{new}=AC$ are $3.2206,\,2.1512$ and $0$.
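The verification is a one-liner in NumPy; the matrices are exactly those given above, with $C$ being $B$ with its first row zeroed:

```python
import numpy as np

A = np.array([[1, 1, 0], [0, 1, 1], [-1, 1, -1]], dtype=float)
B = np.array([[1, -1, 0], [1, 1, -1], [-1, 1, 1]], dtype=float)
C = B.copy()
C[0] = 0.0   # C is B with its first row zeroed

s_old = np.linalg.svd(A @ B, compute_uv=False)
s_new = np.linalg.svd(A @ C, compute_uv=False)
print(np.round(s_old, 4))  # [3.1503 2.0531 0.9277]
print(np.round(s_new, 4))  # [3.2206 2.1512 0.    ]

# Each nonzero singular value of the new product exceeds
# the corresponding singular value of the old one.
print(s_new[0] > s_old[0] and s_new[1] > s_old[1])  # True
```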