Eigenvalues of $AB$ and $BA$ where $A$ and $B$ are square matrices


Show that if $A,B \in M_{n \times n}(K)$, where $K=\mathbb{R}$ or $\mathbb{C}$, then the matrices $AB$ and $BA$ have the same eigenvalues.

I do that like this:

Let $\lambda$ be an eigenvalue of $B$ with eigenvector $v\neq 0$. Then

$ABv=A\lambda v=\lambda Av=BAv$

The third equality is valid because $Av$ is an eigenvector of $B$. Am I doing it right?

There are 6 answers below.

BEST ANSWER

It suffices to show that $AB$ and $BA$ have the same characteristic polynomial. First assume that $A$ is invertible; then

$$\chi_{AB}(x)=\det(AB-xI)=\det A\det(B-xA^{-1})\\=\det(B-xA^{-1})\det A=\det(BA-xI)=\chi_{BA}(x).$$ Now, since $\operatorname{GL}_n(K)$ is dense in $\operatorname{M}_n(K)$, there is a sequence of invertible matrices $(A_k)$ converging to $A$, and by the continuity of the $\det$ function we have $$\chi_{AB}(x)=\det(AB-xI)=\lim_{k\to\infty}\det(A_kB-xI)=\lim_{k\to\infty}\det(BA_k-xI)\\=\det(BA-xI)=\chi_{BA}(x).$$
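For readers who want to check $\chi_{AB}=\chi_{BA}$ numerically, here is a minimal NumPy sketch (illustrative only, not part of the proof; note that `np.poly` returns the coefficients of $\det(xI-M)$, which differs from $\det(M-xI)$ only by the sign $(-1)^n$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Characteristic polynomial coefficients of AB and BA agree.
print(np.allclose(np.poly(A @ B), np.poly(B @ A)))  # True
```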

ANSWER

If $A$ is invertible, user63181 already showed that

$$\det(AB-xI)=\det(BA-xI)$$

We now prove this equality in general.

Fix $x$.

Let $$P_x(y):=\det[(A-yI)B-xI]-\det[B(A-yI)-xI] \,.$$

Then $P_x(y)$ is a polynomial of degree at most $n$ in $y$. Whenever $y$ is not an eigenvalue of $A$, the matrix $A-yI$ is invertible, so by the first part $P_x(y)=0$. Hence $P_x$ has infinitely many roots, and therefore $P_x \equiv 0$.

In particular $P_x(0)=0$, which is exactly the desired equality $\det(AB-xI)=\det(BA-xI)$.
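A quick numerical spot check of the identity $P_x(y)=0$, assuming NumPy (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I = np.eye(n)

# P_x(y) = det[(A - yI)B - xI] - det[B(A - yI) - xI] vanishes identically.
for _ in range(3):
    x, y = rng.standard_normal(2)
    lhs = np.linalg.det((A - y * I) @ B - x * I)
    rhs = np.linalg.det(B @ (A - y * I) - x * I)
    print(np.isclose(lhs, rhs))  # True each time
```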

ANSWER

Here is a more "algebraic" approach than the other answers by user63181 and N. S., which, as far as I can see, generalizes to other fields (where the continuity argument might fail), although I thought the argument by continuity was cool!

First we use the following characterization of non-zero eigenvalues: for $\lambda \neq 0$, $\lambda$ is an eigenvalue of $AB$ iff $AB - \lambda I$ is not invertible, i.e. iff $I - \lambda^{-1} AB$ is not invertible.

I claim the following: $I - \lambda AB$ is invertible iff $I - \lambda BA$ is invertible.

Proof: Suppose $I - \lambda AB$ is invertible, and let $U := I + \lambda B (I - \lambda AB)^{-1}A$. One checks, by multiplying out and using distributivity, that $U$ is an inverse of $I - \lambda BA$. The converse direction follows by letting $V := I + \lambda A(I-\lambda BA)^{-1}B$.
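One can also verify the formula for $U$ numerically; a minimal NumPy sketch, assuming a random draw for which $I-\lambda AB$ is invertible (generically true):

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 4, 0.3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I = np.eye(n)

# U = I + lam * B (I - lam*AB)^{-1} A inverts I - lam*BA.
U = I + lam * B @ np.linalg.inv(I - lam * A @ B) @ A
print(np.allclose((I - lam * B @ A) @ U, I))  # True
print(np.allclose(U @ (I - lam * B @ A), I))  # True
```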

Applying the claim with $\lambda^{-1}$ in place of $\lambda$, it follows that $AB$ and $BA$ have the same non-zero eigenvalues.

ANSWER

Here is a proof similar to what the OP has tried:

Let $\lambda$ be any eigenvalue of $AB$ with corresponding eigenvector $x$. Then

$$ABx = \lambda x \Rightarrow \\ BABx = B\lambda x \Rightarrow\\ BA(Bx) = \lambda (Bx) $$

which implies that $\lambda$ is an eigenvalue of $BA$ with a corresponding eigenvector $Bx$, provided $Bx$ is non-zero. If $Bx = 0$, then $ABx = 0$ implies that $\lambda = 0$.

Thus, $AB$ and $BA$ have the same non-zero eigenvalues.
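Here is a small NumPy illustration of this eigenvector transport $x\mapsto Bx$ (illustrative only; for a random draw, $Bx\neq0$ generically):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

lams, X = np.linalg.eig(A @ B)
lam, x = lams[0], X[:, 0]   # an eigenpair of AB
w = B @ x                   # candidate eigenvector of BA
print(np.allclose((B @ A) @ w, lam * w))  # True
```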

ANSWER

Alternative proof #1:

If $n\times n$ matrices $X$ and $Y$ are such that $\mathrm{tr}(X^k)=\mathrm{tr}(Y^k)$ for $k=1,\ldots,n$, then $X$ and $Y$ have the same eigenvalues.

See, e.g., this question.

Using $\mathrm{tr}(UV)=\mathrm{tr}(VU)$, it is easy to see that $$ \mathrm{tr}[(AB)^k]=\mathrm{tr}(\underbrace{ABAB\cdots AB}_{\text{$k$-times}}) =\mathrm{tr}(\underbrace{BABA\cdots BA}_{\text{$k$-times}})=\mathrm{tr}[(BA)^k]. $$ Now use the above with $X=AB$ and $Y=BA$.
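A quick NumPy check of the trace identity for $k=1,\ldots,n$ (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# tr((AB)^k) == tr((BA)^k) for k = 1, ..., n.
P, Q = A @ B, B @ A
print([np.isclose(np.trace(np.linalg.matrix_power(P, k)),
                  np.trace(np.linalg.matrix_power(Q, k)))
       for k in range(1, n + 1)])  # all True
```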

Alternative proof #2:

$$ \begin{bmatrix} I & A \\ 0 & I \end{bmatrix}^{-1} \color{red}{\begin{bmatrix} AB & 0 \\ B & 0 \end{bmatrix}} \begin{bmatrix} I & A \\ 0 & I \end{bmatrix} = \color{blue}{\begin{bmatrix} 0 & 0 \\ B & BA \end{bmatrix}}. $$ Since the $\color{red}{\text{red matrix}}$ and the $\color{blue}{\text{blue matrix}}$ are similar, they have the same eigenvalues. Since both are block triangular, their eigenvalues are the eigenvalues of the diagonal blocks: those of $AB$ together with $n$ zeros on one side, and $n$ zeros together with those of $BA$ on the other. Hence $AB$ and $BA$ have the same eigenvalues, even counted with multiplicity.
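The block similarity is easy to verify numerically with `np.block`; a minimal NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I, Z = np.eye(n), np.zeros((n, n))

S = np.block([[I, A], [Z, I]])      # the similarity transform
M = np.block([[A @ B, Z], [B, Z]])  # the "red" matrix
N = np.block([[Z, Z], [B, B @ A]])  # the "blue" matrix
print(np.allclose(np.linalg.inv(S) @ M @ S, N))  # True
```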

ANSWER

Observe that there is some $v\neq0$ such that $ABv=\lambda v$ iff there are $v\neq0,w$ that solve the linear system: $$\begin{cases}Aw=\lambda v, \\Bv=w.\end{cases}\tag1$$ On the other hand, from (1), if $\lambda\neq0$, one also immediately gets $BAw=\lambda w$ (and $w\neq0$, since $Aw=\lambda v\neq0$).

It follows that $ABv=\lambda v$ for some $\lambda\neq0$ iff $BAw=\lambda w$, where $w=Bv$ (and thus also $v=\frac1\lambda Aw$).

Example

For example, consider $$A\equiv \begin{pmatrix}1&0\\0&0\end{pmatrix}, \qquad B\equiv \frac12\begin{pmatrix}1&1\\1&1\end{pmatrix}.$$ Then $$ AB = \frac12\begin{pmatrix}1&1\\0&0\end{pmatrix}, \qquad BA = \frac12\begin{pmatrix}1&0\\1&0\end{pmatrix}. $$ Thus $AB e_1=\frac12 e_1$ and $AB(e_1-e_2)=0$, while $BA(e_1+e_2)=\frac12(e_1+e_2)$ and $BA e_2=0$. And we can notice how the eigenvectors for the nonzero eigenvalue are related as spelled out before: $Be_1= \frac12(e_1+e_2)$, and $A(e_1+e_2)=e_1$.
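The example is easy to reproduce with NumPy (illustrative only):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]]) / 2

# Both products have eigenvalues {0, 1/2}.
print(np.sort(np.linalg.eigvals(A @ B)))  # [0.  0.5]
print(np.sort(np.linalg.eigvals(B @ A)))  # [0.  0.5]
```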

On the other hand, this same example shows how the result fails for zero eigenvalues. It remains true that both matrices have kernels of the same dimension, but the corresponding eigenvectors are not related as in the nonzero-eigenvalue case. Here the kernel of $AB$ is spanned by $e_1-e_2$, but $B(e_1-e_2)=0$, which thus clearly does not give the kernel of $BA$, which is spanned by $e_2$.

I'd also observe here that the reason we don't get the stated relation between eigenvectors of $AB$ and $BA$ is that we can have situations where $ABv=0$ because $Bv=0$ (which is exactly what happens in the example). If however $ABv=0$ but $Bv\neq0$, then we recover (at least partially) the result, because $(BA)(Bv)=0$.