Can every matrix be decomposed as a sum of products of its right and left eigenvectors?


Given an $N\times N$ matrix $A$, I have seen the following formula in the literature, stated without any further requirements:

$$A= \sum_{i}\lambda_i R^T_i L_i $$ where the $\lambda_i$ are eigenvalues and $R_i$ and $L_i$ are the right and left eigenvectors associated with $\lambda_i$.

I'm confused by this formula. First, I never saw it in my linear algebra course. Second, it leaves several ambiguities open: What about a non-diagonalizable $A$, i.e. one with fewer independent eigenvectors than its dimension? And when some $\lambda_i$ is degenerate, how should $R_i$ and $L_i$ be paired?

So where can I find a complete statement of this formula? What is this decomposition called?


2 Answers


As stated, the formula makes no sense (at least for an arbitrary matrix). But it heavily resembles the Singular Value Decomposition.

There is another oddity in the formula you wrote, in that it seems to treat vectors as row vectors (i.e., $1\times N$ matrices); this is really odd because it then forces you to write the usual "matrix times vector" as $Ax^T$.

The Singular Value Decomposition is $A=UDV^T$ (with $V^*$ in place of $V^T$ in the complex case), where $U,V$ are unitary (orthogonal if your matrix is real) and $D$ is diagonal with the singular values on the diagonal. If $e_1,\ldots,e_n$ is the canonical basis, we can write $$ D=\sum_{j=1}^n\sigma_j\,e_je_j^T. $$ Thus $$ A=\sum_{j=1}^n\sigma_j\,Ue_j\, (Ve_j)^T=\sum_{j=1}^n\sigma_j\, x_jy_j^T, $$ where $x_1,\ldots,x_n$ and $y_1,\ldots,y_n$ are two orthonormal bases.
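This rank-one expansion is easy to check numerically. Here is a short sketch using NumPy (the variable names are mine; note that `np.linalg.svd` returns $V^T$ directly, not $V$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# NumPy returns A = U @ diag(s) @ Vt, i.e. it hands back V^T, not V
U, s, Vt = np.linalg.svd(A)

# Rank-one expansion: A = sum_j sigma_j x_j y_j^T, with x_j = U e_j, y_j = V e_j
A_rec = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(3))

assert np.allclose(A_rec, A)
```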


The quoted formula is actually equivalent to the usual eigenvalue decomposition of a matrix $\mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}$, so it only works for diagonalisable matrices. Let me explain this below.

As pointed out by Martin Argerami, the formula uses a weird convention where both $\mathbf{L}_i$ and $\mathbf{R}_i$ are row vectors; for convenience, let me change $\mathbf{R}_i$ into column vectors. Define the matrices $\mathbf{X}_L$ whose rows comprise the left eigenvectors, and $\mathbf{X}_R$ whose columns comprise the right eigenvectors: $$\mathbf{X}_L = \begin{pmatrix} \mathbf{L}_1 \\ \mathbf{L}_2 \\ \vdots \end{pmatrix}, \quad \mathbf{X}_R = \begin{pmatrix} \mathbf{R}_1 & \mathbf{R}_2 & \cdots \end{pmatrix}. $$ I now argue that the quoted formula is equivalent to $\mathbf{A} = \mathbf{X}_R \mathbf{\Lambda} \mathbf{X}_L$. To see this, write out the matrix elements $$\begin{aligned} A_{ij} &= \sum_{kl} (X_R)_{ik} \Lambda_{kl} (X_L)_{lj}\\ &= \sum_{kl} (R_k)_i \lambda_k \delta_{kl} (L_l)_j\\ &= \sum_k (R_k)_i \lambda_k (L_k)_j, \end{aligned}$$ where in the second line, I used the definitions of $\mathbf{\Lambda}$, $\mathbf{X}_L$ and $\mathbf{X}_R$. The last line gives exactly the matrix elements obtained from the quoted formula.

Having shown that the formula is equivalent to $\mathbf{A} = \mathbf{X}_R \mathbf{\Lambda} \mathbf{X}_L$, I now use the known result that the left and right eigenvectors are biorthogonal, so we can always normalise them such that $\mathbf{X}_L \mathbf{X}_R = \mathbf{I}$. With this choice of normalisation, we have $\mathbf{X}_L = \mathbf{X}_R^{-1}$ since for finite dimensions, $\mathbf{A}\mathbf{B}=\mathbf{I} \implies \mathbf{B}\mathbf{A}=\mathbf{I}$. Thus, the quoted formula is equivalent to $\mathbf{A} = \mathbf{X}_R \mathbf{\Lambda} \mathbf{X}_R^{-1}$. Hence, we recover the usual spectral decomposition with $\mathbf{Q} = \mathbf{X}_R$.
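A quick numerical sketch of this equivalence (using NumPy; following the argument above, I take the rows of $\mathbf{X}_R^{-1}$ as the normalised left eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))  # a generic matrix is diagonalisable

# Right eigenvectors are the columns of XR: A @ XR = XR @ diag(lam)
lam, XR = np.linalg.eig(A)

# Rows of XR^{-1} are left eigenvectors, already normalised so that XL @ XR = I
XL = np.linalg.inv(XR)

# The quoted formula: A = sum_i lambda_i * (R_i as column) * (L_i as row)
A_rec = sum(lam[i] * np.outer(XR[:, i], XL[i, :]) for i in range(4))

assert np.allclose(A_rec, A)
assert np.allclose(XL @ A @ XR, np.diag(lam))  # rows of XL are left eigenvectors
```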

With regards to the other parts of your question:

  • The formula doesn't work for non-diagonalisable matrices.
  • Even when there is degeneracy, you can still choose bases of the degenerate subspace, for both the left and right eigenvectors, that preserve biorthogonality. Use those bases when pairing $\mathbf{L}_i$ and $\mathbf{R}_i$.
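To see the first bullet concretely: for a defective matrix, `np.linalg.eig` cannot return $N$ independent eigenvectors, so no biorthogonal left eigenvectors exist and the formula cannot reproduce the matrix. A small sketch with a $2\times 2$ Jordan block:

```python
import numpy as np

# Jordan block: eigenvalue 1 with multiplicity 2, but only one eigenvector
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

lam, XR = np.linalg.eig(J)

# The two returned eigenvector columns are (numerically) parallel, so XR is
# singular and no set of biorthogonal left eigenvectors exists for J.
overlap = abs(XR[:, 0] @ XR[:, 1])
assert overlap > 0.99  # unit vectors, nearly parallel
```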