Show matrix is diagonalizable


Let $A$ be an $n \times n$ matrix such that $A^2 = I$ and $A \neq I$. Show that $A$ is diagonalizable.

So far I tried multiplying both sides of the first equation by $P$ and by $P^{-1}$, but I don't know how to proceed from there.


There are 3 best solutions below


Let $P(X) = X^2 - 1 = (X - 1)(X + 1)$. Then $P(A) = 0$, so the minimal polynomial of $A$ divides $P$; since $P$ has only simple roots (in characteristic $\neq 2$), so does the minimal polynomial, and therefore $A$ is diagonalizable.
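A quick sanity check of this answer with sympy (my own illustration, not part of the original argument): the swap matrix satisfies $A^2 = I$ with $A \neq I$, annihilates $P(X) = X^2 - 1$, and sympy confirms it is diagonalizable.

```python
from sympy import Matrix, eye, zeros

A = Matrix([[0, 1], [1, 0]])         # swap matrix: A^2 = I, A != I

assert A**2 == eye(2)                # A is an involution
assert A**2 - eye(2) == zeros(2, 2)  # P(A) = A^2 - I = 0
assert A.is_diagonalizable()         # sympy agrees A is diagonalizable
```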


If $A^2 = I$, then $A^2 - I = 0$, so your matrix satisfies the polynomial $Q(t) = t^2 - 1$. The minimal polynomial $\mu(t)$ divides $t^2 - 1$, because $\mu$ is the monic generator of the ideal of polynomials satisfied by the matrix. In characteristic $\neq 2$ we have $Q(t) = (t-1)(t+1)$ with distinct roots, so any divisor of $Q$, in particular $\mu$, has only simple roots, and a matrix whose minimal polynomial has simple roots is diagonalizable. Remark: every root of the minimal polynomial $\mu(t)$ is an eigenvalue.
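A numerical illustration of this answer (my own sketch, with a concrete family of involutions I chose for the example): a Householder reflection $A = I - 2vv^T/(v^Tv)$ satisfies $A^2 = I$, and its eigenvalues are exactly the roots of $t^2 - 1$, namely $-1$ (once, along $v$) and $+1$ (on the hyperplane $v^\perp$).

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(4)
# Householder reflection: an involution that is not the identity
A = np.eye(4) - 2.0 * np.outer(v, v) / (v @ v)

assert np.allclose(A @ A, np.eye(4))          # A^2 = I
w = np.sort(np.linalg.eigvalsh(A))            # A is symmetric here
assert np.allclose(w, [-1.0, 1.0, 1.0, 1.0])  # roots of t^2 - 1
```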


If your field's characteristic is not 2, consider using basic tools of vector spaces (scaling and translating):

$B := 2^{-1}(A+I)$
and $B^2 = 2^{-2}(A^2 + 2A + I) = 2^{-2}(I + 2A +I) = 2^{-2}\cdot 2 (A + I) = B $

so $B$ is idempotent. Idempotent matrices are always diagonalizable, and since $A = 2B - I$, any basis diagonalizing $B$ also diagonalizes $A$. The standard argument is a simple form of the minimal-polynomial argument; I give it below.
(This approach also makes it clear why things break over a field of characteristic 2: $2^{-1}$ doesn't exist there.) I prefer leaning on idempotence, since it is in some ways more important than involutions.
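The translation trick can be checked numerically; this is my own sketch, using the $2 \times 2$ swap matrix as the involution. It verifies that $B = 2^{-1}(A + I)$ is idempotent, that $A = 2B - I$, and that the eigenvectors of $B$ also diagonalize $A$.

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # A^2 = I, A != I
B = 0.5 * (A + np.eye(2))               # translate and scale

assert np.allclose(B @ B, B)                    # B is idempotent
assert np.allclose(2.0 * B - np.eye(2), A)      # recover A from B

# Diagonalize B; the same eigenvector matrix diagonalizes A,
# with eigenvalues mapped by lambda -> 2*lambda - 1.
w, V = np.linalg.eigh(B)
assert np.allclose(V.T @ A @ V, np.diag(2.0 * w - 1.0))
```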

A simple argument for the diagonalizability of an idempotent matrix
(assume $\mathbf 0 \neq B \neq I$, as there is nothing to do in those cases):

$B^2 = B \longrightarrow B^2 - B = (B-I)(B-\mathbf 0) = \mathbf 0$.
Let $r = \dim \ker B$. Choose a linearly independent set $\{\mathbf v_1, \dots, \mathbf v_r\}$ spanning the kernel of $B$, and another linearly independent set $\{\mathbf v_{r+1}, \dots, \mathbf v_n\}$ spanning the image of $B$ (rank-nullity gives $\dim \operatorname{im} B = n - r$).

For $\mathbf v_j \in \{\mathbf v_{r+1}, \dots, \mathbf v_n\}$, write $\mathbf v_j = B\mathbf w$; then $B\mathbf v_j = B^2\mathbf w = B\mathbf w = \mathbf v_j$, so $\big(B - I\big)\mathbf v_j = \mathbf 0$
and $\mathbf v_j$ is an eigenvector of $B$ with eigenvalue $1$. Any vector lying in both $\ker B$ and $\operatorname{im} B$ satisfies $\mathbf v = B\mathbf v = \mathbf 0$, so the eigenvalue-$1$ vectors are linearly independent from the eigenvalue-$0$ vectors. Hence
$\{\mathbf v_1, \dots, \mathbf v_r, \mathbf v_{r+1}, \dots, \mathbf v_n\}$ is a linearly independent set forming a basis of our $n$-dimensional vector space; i.e., $B$ has eigenvectors with eigenvalues $1$ and $0$ that form a basis, so $B$ is diagonalizable.
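A concrete run of the kernel/image argument, as a sympy sketch of my own (the projection matrix is just an example I picked): a basis of $\ker B$ together with a basis of $\operatorname{im} B$ forms a full eigenbasis for an idempotent $B$.

```python
from sympy import Matrix

# B = projection onto the line spanned by (1, 1): idempotent, rank 1
B = Matrix([[1, 1], [1, 1]]) / 2
assert B * B == B

ker = B.nullspace()      # eigenvectors for eigenvalue 0
img = B.columnspace()    # eigenvectors for eigenvalue 1 (B fixes its image)
basis = Matrix.hstack(*(ker + img))

assert basis.rank() == 2                    # together they form a basis
assert all((B * v).norm() == 0 for v in ker)
assert all(B * v == v for v in img)
assert (basis.inv() * B * basis).is_diagonal()
```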


Note: if OP is uncomfortable with minimal-polynomial arguments and is working over $\mathbb R$ or $\mathbb C$, we can also prove idempotent matrices are diagonalizable by observing that all eigenvalues must be $0$ or $1$, then examining the Jordan blocks. Since $B^k = B$, we get $P^{-1}B^kP = J^k = J$, and thus $\big\Vert P^{-1}B^kP\big\Vert_F = \big\Vert J^k\big\Vert_F = \big\Vert J\big\Vert_F$ for every $k$. This forces the superdiagonal of every Jordan block for eigenvalue $1$ to be zero: if it weren't, we'd have the crude lower bound $k \leq \big\Vert J^k\big\Vert_F$, which gives $\big\Vert J^k\big\Vert_F \gt \big\Vert J\big\Vert_F$ for large enough $k$.
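A numeric companion to that note (my own sketch of the norm bound): for a non-trivial Jordan block with eigenvalue $1$, the Frobenius norm of $J^k$ grows without bound, which is incompatible with $J^k = J$.

```python
import numpy as np

J = np.array([[1.0, 1.0], [0.0, 1.0]])   # non-trivial Jordan block, eigenvalue 1
# J^k = [[1, k], [0, 1]], so ||J^k||_F = sqrt(2 + k^2) grows with k
norms = [np.linalg.norm(np.linalg.matrix_power(J, k)) for k in (1, 5, 50)]

assert norms[0] < norms[1] < norms[2]    # ||J^k||_F is growing
assert norms[2] >= 50                    # crude bound: k <= ||J^k||_F
```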

As for eigenvalue $0$, just repeat the argument above on $(I - B)$, which is also idempotent.
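A quick check of this closing remark (again my own sketch, reusing the rank-1 projection as the example): if $B$ is idempotent then so is $I - B$, and its image lies in $\ker B$, the eigenvalue-$0$ part of $B$.

```python
import numpy as np

B = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # idempotent projection
C = np.eye(2) - B

assert np.allclose(B @ B, B)   # B idempotent
assert np.allclose(C @ C, C)   # I - B is idempotent too
assert np.allclose(B @ C, 0)   # B(I - B) = 0: im(I - B) lies in ker B
```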