How to derive the linearized form of the constraint that the singular values are constant?


$\newcommand{\Cof}{\operatorname{Cof}}$

Let $0<\sigma_1<\sigma_2<\cdots<\sigma_n$ and set $A=\operatorname{diag}(\sigma_1,\dots,\sigma_n)$.

Claim: Let $B$ be a real $n \times n$ matrix, and suppose that the singular values of $A+tB$ are constant for all sufficiently small $t \ge 0$. Then $B_{ii}=0$ for every $i$.

How to prove this claim?

I am in fact interested in an 'infinitesimal' version of it, i.e. I think that the conclusion $B_{ii}=0$ should already follow from differentiating the constraints $\sigma_i(A+tB)=\sigma_i$ with respect to $t$ and evaluating at $t=0$.
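As a numerical sanity check on this infinitesimal version, one can compare a finite-difference derivative of the singular values against the diagonal of $B$. This uses the standard first-order perturbation fact that, for a simple singular value, $\frac{d}{dt}\sigma_i(A+tB)\big|_{t=0}=u_i^\top B v_i$; when $A$ is diagonal with distinct positive entries the singular vectors are standard basis vectors, so this derivative is just $B_{ii}$. A minimal NumPy sketch (the matrices here are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Diagonal A with distinct positive entries, listed in the
# descending order that np.linalg.svd uses for singular values.
A = np.diag([3.0, 2.0, 1.0])
B = rng.standard_normal((3, 3))  # arbitrary perturbation

sv = lambda M: np.linalg.svd(M, compute_uv=False)

# Central finite difference for d/dt sigma_i(A + tB) at t = 0.
h = 1e-6
deriv = (sv(A + h * B) - sv(A - h * B)) / (2 * h)

# First-order perturbation theory predicts deriv == diag(B),
# since the singular vectors of diagonal A are the e_i.
assert np.allclose(deriv, np.diag(B), atol=1e-4)
```

So if all singular values are constant in $t$, every $B_{ii}$ must vanish; the question is how to see this directly from the constraints.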

Here is a proof for the case where $n=2$:

Write $A_t=A+tB$. By assumption, $$\det A_t=\sigma_1 \sigma_2, \tag{1}$$ $$\|A_t\|^2=\sigma_1^2+\sigma_2^2 \tag{2}$$ are independent of $t$.


Differentiating equations $(1)$ and $(2)$ at $t=0$ we get $ \langle \Cof A,B \rangle = \langle A,B \rangle=0, $ where $\Cof A=\begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_1 \end{pmatrix}$ is the cofactor matrix of $A$.

Writing this explicitly, we obtain $ \sigma_1 B_{11} +\sigma_2 B_{22} = \sigma_2 B_{11} +\sigma_1 B_{22}=0. $ Subtracting gives $(\sigma_1-\sigma_2)(B_{11} -B_{22} )=0$. Since we assumed $\sigma_1<\sigma_2$ this forces $B_{11} =B_{22}$; substituting back into either equation gives $(\sigma_1+\sigma_2)B_{11}=0$, and since $\sigma_1+\sigma_2>0$ both diagonal entries vanish.


The question is how to prove this for general $n \ge 3$. For $n=2$ I exploited the fact that the determinant and the Frobenius norm together determine the singular values, so they contain all the relevant information. This is no longer true in higher dimensions.
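To make the failure concrete: for $n=3$ the two invariants used above give only two linear constraints on the diagonal of $B$, which cannot pin down three unknowns. A short NumPy sketch exhibiting a diagonal $B$ that satisfies both linearized constraints yet has nonzero diagonal (the specific matrices are my own illustrative choice):

```python
import numpy as np

# A = diag(1, 2, 3); the diagonal of Cof A is (2*3, 1*3, 1*2).
sigma = np.array([1.0, 2.0, 3.0])
cof_diag = np.array([6.0, 3.0, 2.0])

# A diagonal B orthogonal to both constraint vectors: take the
# cross product of sigma and cof_diag, which spans their common
# nullspace in R^3.
b = np.cross(sigma, cof_diag)  # = (-5, 16, -9)
B = np.diag(b)

assert abs(np.dot(sigma, b)) < 1e-12     # <A, B> = 0
assert abs(np.dot(cof_diag, b)) < 1e-12  # <Cof A, B> = 0
# Yet diag(B) is nonzero, so det + norm alone do not force B_ii = 0.
assert not np.allclose(b, 0)
```

So for $n \ge 3$ one needs more invariants (or a different argument) than just the determinant and the norm.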