Show $A$ is diagonalizable if $1$ is an eigenvalue and $A$ has non-zero rank.


I know that $A$ is an $n\times n$ real matrix of rank $r>0$ and that $1$ is an eigenvalue of $A$. I also know that the geometric multiplicity of $1$ is equal to $r$ (the rank of $A$). I am asked to show that $A$ is diagonalizable.

I have been able to use what I know about eigenvalues (and eigenspaces) in addition to the Rank-Nullity Theorem, in order to show that we must have $r=n$. That is, $A$ must be full rank and is therefore invertible. This tells us that $0$ is not an eigenvalue of $A$. I am thinking that from here I should be able to show that $A$ is similar to a diagonal matrix (maybe with the eigenvalue of $A$ along its main diagonal). But I am struggling to make that leap. Any guidance is appreciated.


There are 4 best solutions below


As said in the comments, $A$ doesn't have to be invertible - a matrix $A$ of size $n\times n$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors.

The geometric multiplicity of $1$ is $r$, so $A$ has $r$ linearly independent eigenvectors with eigenvalue $1$. We also know that $r$ is the rank of $A$, so from the rank-nullity theorem $\dim(\ker(A)) = n - \text{rank}(A) = n-r$.

This gives us an additional $n-r$ independent eigenvectors with eigenvalue $0$, and $n$ linearly independent eigenvectors overall, as needed.

Note that we did not use the fact that the eigenvalue is $1$; it could be any nonzero number. The more general statement is that if, for every eigenvalue $\lambda$ of $A$, the geometric multiplicity of $\lambda$ equals its algebraic multiplicity, then $A$ is diagonalizable.
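As a concrete sanity check of this counting argument, here is a short numerical sketch with numpy; the $3\times 3$ matrix is my own example (rank $2$, eigenvalue $1$ with geometric multiplicity $2$), not from the question.

```python
import numpy as np

# Example matrix: n = 3, rank r = 2, eigenvalue 1 with
# geometric multiplicity 2 (equal to the rank).
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 0.]])

n = A.shape[0]
r = np.linalg.matrix_rank(A)

# Geometric multiplicity of 1 is dim ker(A - I) = n - rank(A - I).
gm_one = n - np.linalg.matrix_rank(A - np.eye(n))
# By rank-nullity, dim ker(A) = n - rank(A): eigenvectors for 0.
gm_zero = n - r

print(r, gm_one, gm_zero)   # prints: 2 2 1

# gm_one + gm_zero = n independent eigenvectors, so A is
# diagonalizable: the eigenvector matrix has full rank.
eigvals, V = np.linalg.eig(A)
assert np.linalg.matrix_rank(V) == n
```

Here $2 + 1 = 3$ independent eigenvectors, exactly as the argument predicts.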


One further observation is that if $r = n,$ we can conclude that $\chi(x) = (x - 1)^n,$ since the algebraic multiplicity of an eigenvalue must be at least as large as its geometric multiplicity.

Claim. We must have that the minimal polynomial of $A$ is $\mu(x) = x - 1$ so that $A = I.$

Proof. Certainly, $\mu(x) = (x - 1)^k$ for some integer $1 \leq k \leq n.$ Suppose for contradiction that $k \geq 2.$ Since $\mu(x)$ is the largest invariant factor of $A,$ we have that $(x - 1)^k$ is an elementary divisor of $A,$ so the Jordan Canonical Form of $A$ is not diagonal: one of the Jordan blocks is the $k \times k$ matrix with $1$s on the diagonal and the superdiagonal. But this contradicts the fact that $A$ is diagonalizable (the geometric multiplicity of $1$ is $n,$ so $A$ has $n$ linearly independent eigenvectors). Hence we conclude that $\mu(x) = x - 1.$ QED.

Edit: Unfortunately, the claim of the title is not true. Consider the invertible matrix $$A = \begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix}.$$ One can prove that $\chi(x) = \mu(x) = (x - 1)^3,$ hence its Jordan Canonical Form is $$\operatorname{JCF}(A) = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},$$ so this matrix is not diagonalizable, but it has nonzero rank and an eigenvalue of $1.$
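For what it's worth, the counterexample can be verified numerically; a quick sketch with numpy:

```python
import numpy as np

# The counterexample above: invertible (rank 3), eigenvalue 1,
# yet not diagonalizable, because the geometric multiplicity of 1
# is 1 while its algebraic multiplicity is 3.
A = np.array([[1., 1., 2.],
              [0., 1., 2.],
              [0., 0., 1.]])

n = A.shape[0]
r = np.linalg.matrix_rank(A)                   # 3: full rank
gm = n - np.linalg.matrix_rank(A - np.eye(n))  # dim ker(A - I)

print(r, gm)   # prints: 3 1
```

Since the geometric multiplicity of $1$ is $1 < 3$, the matrix is defective, even though it has nonzero rank and $1$ as an eigenvalue.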


So, it is crucial that you understand that the claim in your title line is false as stated. It cannot be proven without the geometric-multiplicity assumption.

Here is the simplest matrix that cannot be diagonalized: $$\begin{bmatrix}1&1\\0&1\end{bmatrix}.$$Obviously $\begin{bmatrix}1\\0\end{bmatrix}$ is an eigenvector with eigenvalue $1$. In addition the columns are obviously linearly independent, so this matrix has rank $2$. (What even is a matrix of zero rank? I suppose it is the zero matrix?)

So the crucial fact is the geometric multiplicity, which is defined as the dimension of the nullspace of $A - \lambda I.$ For the matrix above and eigenvalue $1$, this is $$\begin{bmatrix}0&1\\0&0\end{bmatrix},$$ and rather than having a nullspace of dimension $2$ as one might have hoped (the determinant is the product of the eigenvalues and is $1$; the trace is their sum and is $2$; so the eigenvalues are $+1$ and $+1$), the geometric multiplicity is only $1$: there is one independent eigenvector with that eigenvalue, not two.
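A quick numpy check of these claims about the $2\times 2$ example:

```python
import numpy as np

# The 2x2 matrix from the example above.
J = np.array([[1., 1.],
              [0., 1.]])

rank_J = np.linalg.matrix_rank(J)     # 2: the columns are independent
# det = product of eigenvalues, trace = sum of eigenvalues,
# so both eigenvalues are 1.
det_J, tr_J = np.linalg.det(J), np.trace(J)
# But dim ker(J - I) = 1: only one independent eigenvector.
gm_J = 2 - np.linalg.matrix_rank(J - np.eye(2))

print(rank_J, gm_J)   # prints: 2 1
```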

The fact that the geometric multiplicity is equal to the rank is therefore critical to the proof. If you have a proof which tries to water this assumption down then that proof fails.

The proof for $r=n$ is really easy: the nullspace of $A - I$ has dimension $n$, so it is the whole space; therefore $A - I = 0$, and so $A = I.$

The hard part is the case $r < n.$ Roughly speaking, the intuition is this: by the rank-nullity theorem, the extra $n - r$ dimensions come from the nullspace $\ker(A)$ and therefore correspond to eigenvalue $0$, while the remaining dimensions correspond to eigenvalue $1$. So the simplest nontrivial examples would be e.g. $$\begin{bmatrix}1&0\\0&0\end{bmatrix}, ~~\begin{bmatrix}1&1\\0&0\end{bmatrix},~~ \begin{bmatrix}1&0\\1&0\end{bmatrix},~~\begin{bmatrix}1&0&1\\0&1&1\\0&0&0\end{bmatrix}.$$
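As a sketch, one can verify the hypothesis for the first and last of these example matrices with numpy; the helper `checks` is my own:

```python
import numpy as np

def checks(A):
    """Return (rank, geometric multiplicity of eigenvalue 1)."""
    n = A.shape[0]
    r = np.linalg.matrix_rank(A)
    gm = n - np.linalg.matrix_rank(A - np.eye(n))
    return int(r), int(gm)

# The first and last example matrices above: in each case the
# geometric multiplicity of 1 equals the rank, so the hypothesis holds.
A1 = np.array([[1., 0.],
               [0., 0.]])
A2 = np.array([[1., 0., 1.],
               [0., 1., 1.],
               [0., 0., 0.]])

print(checks(A1))   # prints: (1, 1)
print(checks(A2))   # prints: (2, 2)
```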

How exactly you prove this depends on what facts you have at your disposal. You know that the geometric multiplicities of the eigenvalues add up to $r + (n - r) = n$, and this may be sufficient; otherwise you may need to show directly that the vectors spanning one eigenspace and the vectors spanning the other are linearly independent of each other.


That $1$ has geometric multiplicity $r$ means that the null space $Z_1={\rm ker}\,(A-I)$ has dimension $r$.

That the rank of $A$ is $r$ shows that $Z_0 = {\rm ker}\, A$ has dimension $n-r$. Let $e_1,\dots,e_r$ be a basis for $Z_1$ and $e_{r+1},\dots,e_n$ a basis for $Z_0$. Then $A e_j=e_j$ for $1\leq j\leq r$ and $A e_k=0$ for $r+1\leq k \leq n$. Since eigenvectors for the distinct eigenvalues $1$ and $0$ are linearly independent and $\dim Z_1 + \dim Z_0 = n$, the vectors $e_1,\dots,e_n$ form a basis of eigenvectors. We have diagonalized $A$.
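The construction above can be carried out numerically. This sketch computes bases for the two null spaces via the SVD (the $3\times 3$ matrix is my own example satisfying the hypothesis), assembles them into a matrix $P$ whose columns are $e_1,\dots,e_n$, and checks that $P^{-1}AP$ is diagonal.

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Orthonormal basis of ker(M), computed from the SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T          # columns span the null space

# Example: n = 3, rank r = 2, eigenvalue 1 with geometric multiplicity 2.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 0.]])
n = A.shape[0]

Z1 = null_basis(A - np.eye(n))  # basis of ker(A - I): eigenvalue 1
Z0 = null_basis(A)              # basis of ker(A): eigenvalue 0

P = np.hstack([Z1, Z0])         # columns e_1, ..., e_n as in the answer
D = np.linalg.inv(P) @ A @ P    # should be diag(1, 1, 0)

print(np.round(D, 10))
```

The printed matrix is $\operatorname{diag}(1,1,0)$: the eigenvalue $1$ appears $r = 2$ times and $0$ appears $n - r = 1$ time.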