Prove that an $n \times n$ matrix $A$ with entries from $\mathbb{C}$ satisfying $A^3 = A$ can be diagonalized.
There's a solution, which can be found here (page 4, question 12.3.22). I am having trouble understanding some of the reasoning.
SOLUTION: If $A^3=A$, then the minimal polynomial of $A$ divides $x^3-x = x(x-1)(x+1)$. Hence the only possible eigenvalues of $A$ are $0$ and $\pm 1$. Suppose $C^{-1}AC$ is in Jordan form; then $C^{-1}A^3C$ is also in Jordan form. Suppose $\lambda \in \{0, \pm 1\}$ is an eigenvalue and $J$ is a Jordan block for $\lambda$. Then $J^3=J$ if and only if $J$ has size $1$. Hence all Jordan blocks of $A$ have size $1$, so $A$ is diagonalizable.
- How do we know the minimal polynomial divides $p(x)=x^3 - x$?
Is it because the minimal polynomial, say $m$, is the unique monic polynomial of smallest degree such that $m(A)=0$? We know that $$p(A) = A^3-A = 0.$$ So it could happen that $m = p$, but without more information the most we can say is that $m$ divides $p$. Correct?
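(Divisibility follows from polynomial division: writing $p = qm + r$ with $\deg r < \deg m$ gives $r(A)=0$, which forces $r=0$.) As a numerical sanity check, here is a hypothetical matrix, built with numpy, for which $p(x)=x^3-x$ annihilates $A$ but no proper divisor of $p$ does, so the minimal polynomial is exactly $x(x-1)(x+1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical example: D has eigenvalues 0, 1, 1, -1, so D^3 = D,
# and any A similar to D satisfies A^3 = A as well.
D = np.diag([0.0, 1.0, 1.0, -1.0])
C = rng.standard_normal((4, 4))
A = C @ D @ np.linalg.inv(C)
I = np.eye(4)

# p(x) = x^3 - x annihilates A ...
assert np.allclose(A @ A @ A, A)
# ... but no proper divisor of p does, since all three eigenvalues occur:
assert not np.allclose(A @ (A - I), 0)        # x(x-1) fails
assert not np.allclose(A @ (A + I), 0)        # x(x+1) fails
assert not np.allclose((A - I) @ (A + I), 0)  # (x-1)(x+1) fails
```

Of course, for a matrix missing one of the eigenvalues the minimal polynomial would be a proper divisor of $p$, which is why divisibility is all one can conclude in general.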
- How do we know that the Jordan block should satisfy $J^3 = J$?
I will change notation slightly: let $J_{\lambda}$ denote a Jordan block with eigenvalue $\lambda$. I believe the author is claiming that $J_{\lambda}^3 = J_{\lambda}$, but I have no idea why this is being considered.
If we let $J$ denote the Jordan form of $A$, then $$J=C^{-1}AC = C^{-1}A^3C = J^3.$$ From this, I believe it follows that the Jordan blocks have size 1.
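That is the right idea: since $J^3=J$ holds blockwise, each Jordan block must satisfy $J_\lambda^3 = J_\lambda$, and one can check concretely that any block of size larger than $1$ violates this. A minimal numpy sketch with size-$2$ blocks:

```python
import numpy as np

# Size-2 Jordan blocks for each possible eigenvalue 0, 1, -1:
# in every case J^3 differs from J on the superdiagonal.
for lam in (0.0, 1.0, -1.0):
    J = np.array([[lam, 1.0],
                  [0.0, lam]])
    assert not np.allclose(J @ J @ J, J)

# A size-1 block [lam] satisfies J^3 = J exactly when lam^3 = lam,
# i.e. for lam in {0, 1, -1}.
for lam in (0.0, 1.0, -1.0):
    assert lam**3 == lam
```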
It is not too taxing to do this without using the theorem stating that a matrix with a square-free minimal polynomial is diagonalizable.
Let $x\in \Bbb{C}^n$ be an arbitrary vector. The relation $A^3=A$ then says the vector $$ x_1=(A^2-I)x $$ satisfies $$ Ax_1=A(A^2-I)x=(A^3-A)x=0. $$ Similarly we see that the vector $$ x_2=(A^2-A)x $$ satisfies $$ Ax_2=A^3x-A^2x=Ax-A^2x=-x_2. $$ Also the vector $$ x_3=(A^2+A)x $$ satisfies the relation $$ Ax_3=A^3x+A^2x=Ax+A^2x=x_3. $$ To summarize, if any of $x_1$, $x_2$, $x_3$, is a non-zero vector, it is an eigenvector of $A$ belonging to the respective eigenvalue $\lambda_1=0$, $\lambda_2=-1$, $\lambda_3=1$.
But the linear combination $$ -2x_1+x_2+x_3=[-2(A^2-I)+(A^2-A)+(A^2+A)]x=2Ix=2x. $$ Therefore any vector $x$ can be written as a linear combination $$ x=-x_1+\frac12 x_2+\frac12x_3 $$ of eigenvectors of $A$. This is another sufficient condition for the diagonalizability of $A$.
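The identities above are easy to verify numerically; here is a small sketch (numpy, with a hypothetical $A$ built to satisfy $A^3=A$):

```python
import numpy as np

rng = np.random.default_rng(1)
# Any A similar to a diagonal matrix with entries in {0, 1, -1}
# satisfies A^3 = A.
C = rng.standard_normal((3, 3))
A = C @ np.diag([0.0, 1.0, -1.0]) @ np.linalg.inv(C)
I = np.eye(3)

x = rng.standard_normal(3)
x1 = (A @ A - I) @ x  # killed by A (eigenvalue 0)
x2 = (A @ A - A) @ x  # eigenvector for -1 (if nonzero)
x3 = (A @ A + A) @ x  # eigenvector for +1 (if nonzero)

assert np.allclose(A @ x1, 0)
assert np.allclose(A @ x2, -x2)
assert np.allclose(A @ x3, x3)
# x decomposes into a combination of eigenvectors:
assert np.allclose(-x1 + x2 / 2 + x3 / 2, x)
```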
IIRC, the implication "the sum of the eigenspaces is the whole space $\implies$ diagonalizable" is encountered a lot earlier than the (more general) implication about the minimal polynomial being square-free used in the other answers.
It may be illuminating to check how the above argument can be constructed as an adaptation of the proof of that more general result to the case where the minimal polynomial is $x^3-x$ in particular. Undoubtedly you spotted that I used the maximal proper factors of the minimal polynomial in constructing the vectors $x_i$, $i=1,2,3$. This direct "proof of a special case" is often used when the minimal polynomial is known to be a factor of $x^k-1$. Then the decomposition into eigenvectors becomes (essentially) discrete Fourier analysis.
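To illustrate that last remark: if $A^k=I$, then with $\omega = e^{2\pi i/k}$ the operators $P_j = \frac{1}{k}\sum_{m=0}^{k-1}\omega^{-jm}A^m$ project onto the eigenspaces for the eigenvalues $\omega^j$, exactly the discrete Fourier sums. A sketch with $k=4$, using a cyclic permutation matrix as a stand-in for $A$:

```python
import numpy as np

# Cyclic permutation matrix: A^4 = I, so the minimal polynomial
# divides x^4 - 1 and the eigenvalues are 4th roots of unity.
A = np.roll(np.eye(4), 1, axis=0)
w = np.exp(2j * np.pi / 4)

# Discrete Fourier sums in A: P[j] projects onto the
# eigenspace for the eigenvalue w**j.
P = [sum(w**(-j * m) * np.linalg.matrix_power(A, m) for m in range(4)) / 4
     for j in range(4)]

# The projections resolve the identity (every x decomposes
# into eigenvectors) ...
assert np.allclose(sum(P), np.eye(4))
# ... and each P[j] maps into the w**j eigenspace.
for j in range(4):
    assert np.allclose(A @ P[j], w**j * P[j])
```

Note that the three operators $A^2-I$, $A^2-A$, $A^2+A$ used above are (up to scalars) exactly these projections for the factorization $x^3-x = x(x-1)(x+1)$.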