Prove that if $v$ is an eigenvector for $A$, then $v$ is also an eigenvector for $\operatorname{adj}(A)$.


Let $A$ be a complex square matrix and $\operatorname{adj}(A)$ its adjugate (so the entries of $\operatorname{adj}A$ are minors of $A$, up to sign). Prove that if $v$ is an eigenvector for $A$, then $v$ is also an eigenvector for $\operatorname{adj}(A)$.


There are 4 answers below.

Answer 1

An argument that works over any commutative ring:

It is well-known that $\operatorname{adj} A$ can be written as a polynomial in $A$. In other words, there exists a polynomial $p$ (in one variable) over the ground ring such that $\operatorname{adj} A = p\left(A\right)$. (This $p$ can be described explicitly: We have \begin{align} p\left(t\right) = \left(-1\right)^{n-1} \sum_{i=0}^{n-1} c_{n-1-i} t^i , \end{align} where $c_j$ denotes the coefficient of $t^{n-j}$ in the characteristic polynomial $\det\left(tI_n-A\right)$ of $A$. See Theorem 5.14 in my paper "The trace Cayley-Hamilton theorem" for a proof. Alternatively, you can find proofs at https://mathoverflow.net/questions/32133/expressing-adja-as-a-polynomial-in-a .)

Now, let $v$ be an eigenvector of $A$, and let $\lambda$ be the corresponding eigenvalue. Thus, $A v = \lambda v$. Consider the polynomial $p$ defined above, which satisfies $\operatorname{adj} A = p\left(A\right)$. From $A v = \lambda v$, we can easily obtain (by induction) that $A^i v = \lambda^i v$ for each nonnegative integer $i$. Hence, $p\left(A\right) v = p\left(\lambda\right) v$ (since the polynomial $p\left(t\right)$ is a linear combination of powers $t^i$). Thus, $v$ is an eigenvector of the matrix $p\left(A\right)$. In other words, $v$ is an eigenvector of the matrix $\operatorname{adj} A$ (since $\operatorname{adj} A = p\left(A\right)$).
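As a quick sanity check of $\operatorname{adj} A = p\left(A\right)$, here is a pure-Python sketch for $n=3$, where the formula above gives $p(t) = t^2 - \operatorname{tr}(A)\, t + e_2$ with $e_2$ the sum of the principal $2\times 2$ minors (the helper functions and example matrices are my own, not from the answer):

```python
# Verify adj(A) = p(A) = A^2 - tr(A) A + e2 I for a 3x3 example.

def minor(M, i, j):
    # M with row i and column j deleted
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def det(M):
    # Laplace expansion along the first row (fine for small n)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adj(M):
    # adjugate = transpose of the cofactor matrix
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
tr = sum(A[i][i] for i in range(3))
e2 = sum(det(minor(A, i, i)) for i in range(3))  # sum of principal 2x2 minors
A2 = matmul(A, A)
pA = [[A2[i][j] - tr * A[i][j] + (e2 if i == j else 0) for j in range(3)]
      for i in range(3)]
assert pA == adj(A)                  # adj(A) = A^2 - tr(A) A + e2 I

# Eigenvector check: e1 is an eigenvector of the triangular B for the
# eigenvalue 2, and adj(B) e1 = p(2) e1 with p(2) = 4 - 10*2 + 31 = 15.
B = [[2, 1, 0],
     [0, 3, 1],
     [0, 0, 5]]
adjB = adj(B)
col = [row[0] for row in adjB]       # adj(B) applied to e1
assert col == [15, 0, 0]
```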

Answer 2

The easiest (and the most natural) solution is to note that $\operatorname{adj}(A)$ is a polynomial in $A$ (as indicated in darij grinberg's answer). For a more elementary approach, you may consider the rank of $A$. Suppose $A$ is $n\times n$. There are three possibilities:

  1. $\operatorname{rank}(A)=n$. This case is easy, as $\operatorname{adj}(A)=\det(A)A^{-1}$.
  2. $\operatorname{rank}(A)<n-1$. This case is also easy, as $\operatorname{adj}(A)=0$: every $(n-1)\times(n-1)$ minor of $A$ vanishes.
  3. $\operatorname{rank}(A)=n-1$. Then the left and right null spaces of $A$ are both one-dimensional and $\operatorname{adj}(A)=uv^T$ for some vector $u$ spanning the right null space of $A$ and some vector $v$ spanning the left null space of $A$. Now, suppose $x$ is an eigenvector of $A$.
    • If $Ax=0$, since the null space of $A$ is one-dimensional, we must have $x=cu$ for some scalar $c$. Thus $\operatorname{adj}(A)x=uv^T(cu)=(v^Tu)x$.
    • If $x$ is an eigenvector of $A$ corresponding to some nonzero eigenvalue $\lambda$, then $0=(v^TA)x=v^T(Ax)=\lambda v^Tx$. Hence $v^Tx=0$ and $\operatorname{adj}(A)x=uv^Tx=0$.
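A small pure-Python check of case 3 (the rank-$2$ example matrix and helper functions are my own, not from the answer):

```python
# Case 3 concretely: rank(A) = 2, adj(A) = u v^T with u, v spanning
# the right and left null spaces of A.

def minor(M, i, j):
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adj(M):
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

A = [[1, 1, 0],
     [0, 2, 0],
     [0, 0, 0]]          # rank 2
adjA = adj(A)
assert adjA == [[0, 0, 0], [0, 0, 0], [0, 0, 2]]
# adj(A) = u v^T with u = (0, 0, 2)^T (right null space) and
# v = (0, 0, 1)^T (left null space); here v^T u = 2.

x = [1, 1, 0]                          # eigenvector for the nonzero eigenvalue 2
assert matvec(A, x) == [2, 2, 0]       # A x = 2 x
assert matvec(adjA, x) == [0, 0, 0]    # v^T x = 0, so adj(A) x = 0

y = [0, 0, 1]                          # eigenvector for the eigenvalue 0
assert matvec(A, y) == [0, 0, 0]
assert matvec(adjA, y) == [0, 0, 2]    # adj(A) y = (v^T u) y
```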

Answer 3

This is an interesting question; in fact, at first I thought it wasn't true when the eigenvalue is $0$, but it turns out that it is!

We have the well-known formula $adj(A)A = \det(A)I_n$. From this it follows at once that if $v$ is an eigenvector of $A$ with nonzero eigenvalue $\lambda$, then $v$ is an eigenvector of $adj(A)$ with eigenvalue $\det(A)/\lambda$.

If $Av =0$ however, this does not work, so we have to use a trick: perturb $A$ by $\epsilon>0$ to get $A_\epsilon = A +\epsilon I_n$. Then $v$ is an eigenvector of $A_\epsilon$ with eigenvalue $\epsilon$, which is nonzero. It follows that $adj(A_\epsilon)v = \frac{\chi_A(-\epsilon)}{\epsilon}v$, where $\chi_A(t) = \det(A - tI_n)$ denotes the characteristic polynomial of $A$ (so that $\det(A_\epsilon) = \chi_A(-\epsilon)$).

Note that since $A$ has $0$ as an eigenvalue, its determinant is $0$, so $\chi_A(0) =0$. Therefore, letting $\epsilon \to 0$ in the previous equality gives the derivative of $\chi_A$ at $0$: $adj$ is continuous, so this yields $adj(A)v = -\chi_A'(0)v$.

(We even have an explicit formula for the eigenvalue!)
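A pure-Python sketch with exact rationals (the example matrix and helpers are mine) illustrating both the limit and the value $-\chi_A'(0)$:

```python
# For the singular A below, A v = 0 and adj(A) v = -chi_A'(0) v = 2 v,
# and adj(A + eps I) v approaches adj(A) v exactly as eps -> 0.
from fractions import Fraction

def minor(M, i, j):
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adj(M):
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

def matvec(M, x):
    return [sum(M[i][k] * x[k] for k in range(len(x))) for i in range(len(M))]

A = [[1, 1, 0],
     [0, 2, 0],
     [0, 0, 0]]          # chi_A(t) = det(A - t I) = -t (1 - t)(2 - t)
v = [0, 0, 1]
assert matvec(A, v) == [0, 0, 0]        # v is a null vector of A
assert matvec(adj(A), v) == [0, 0, 2]   # -chi_A'(0) = 2

errs = []
for k in (1, 2, 3):
    eps = Fraction(1, 10 ** k)
    Aeps = [[A[i][j] + (eps if i == j else 0) for j in range(3)]
            for i in range(3)]
    w = matvec(adj(Aeps), v)            # = (chi_A(-eps)/eps) v, exactly
    errs.append(abs(w[2] - 2))
assert errs[0] > errs[1] > errs[2]      # convergence to adj(A) v
```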

One might wonder whether this generalizes to other fields, since I used the topology of $\mathbb C$ (or $\mathbb R$) to let $\epsilon$ tend to $0$. The only problem is that you have to be careful in the last step: first work in $K(X)$, where $K$ is your base field, then notice that the fraction $\det(A+XI_n)/X$ is actually a polynomial (because $\det(A)=0$), so we may work in $K[X]$, and then set $X=0$.

(Morally, you can work in the ring of dual numbers $K[\epsilon]$, but the technicalities seem more annoying to write out.)

Answer 4

This is because if $A$ and $B$ are similar square matrices, then $\operatorname{adj}(A)$ and $\operatorname{adj}(B)$ are also similar, via the same change of basis (§). If $v$ is an eigenvector of $A$, pick a basis $e_1,\dots,e_n$ with $v=e_1$; the result is then obvious in that basis.

In order to prove (§), let $K$ be the field of fractions of $\mathbb{Z}[X_1,\dots,X_{2n^2}]$. For any $n\times n$ invertible matrices $F,G$ with coefficients in $K$, the identity $(GFG^{-1})^{-1} = \frac{1}{\det(F)}\operatorname{adj}(GFG^{-1})$, combined with $(GFG^{-1})^{-1} = GF^{-1}G^{-1} = \frac{1}{\det(F)}\,G\operatorname{adj}(F)\,G^{-1}$, gives $\operatorname{adj}(GFG^{-1}) = G\operatorname{adj}(F)G^{-1}$. Clearing denominators with $G^{-1} = \operatorname{adj}(G)/\det(G)$ (and using $\operatorname{adj}(cM)=c^{n-1}\operatorname{adj}(M)$), this yields the polynomial identity $\operatorname{adj}(GF\operatorname{adj}(G)) = \det(G)^{n-2}\,G\operatorname{adj}(F)\operatorname{adj}(G)$. Since this holds for all invertible $F,G$ over $K$, it holds for all $F,G$ with coefficients in $\mathbb{Z}[X_1,\dots,X_{2n^2}]$, and so this polynomial identity holds in any commutative ring. The result can be deduced from this (by dividing again by the determinant of the change-of-basis matrix).
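For $n = 3$ the polynomial identity reads $\operatorname{adj}(GF\operatorname{adj}(G)) = \det(G)\,G\operatorname{adj}(F)\operatorname{adj}(G)$, and it can be checked with exact integer arithmetic, illustrating that it holds over any commutative ring (a sketch; the matrices $F$, $G$ and helper functions are arbitrary choices of mine):

```python
# Check adj(G F adj(G)) = det(G)^{n-2} * G adj(F) adj(G) for n = 3,
# with integer matrices (no division anywhere).

def minor(M, i, j):
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adj(M):
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

F = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
G = [[1, 1, 0], [0, 2, 1], [0, 0, 1]]   # det(G) = 2

lhs = adj(matmul(matmul(G, F), adj(G)))
d = det(G)                              # det(G)^{n-2} = det(G) for n = 3
rhs = [[d * x for x in row] for row in matmul(matmul(G, adj(F)), adj(G))]
assert lhs == rhs
```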