I was going over the following problem:
(a) Let $T$ be a linear operator on a finite dimensional vector space $V$, such that $T^2=I$. Prove that for any $v \in V$, $v-Tv$ is either an eigenvector with eigenvalue $-1$ or the zero vector. Prove that $V$ is the direct sum of $V^{1}$ and $V^{-1}$.
This is easy to show via the decomposition $$v=\frac{v+Tv}{2}+\frac{v-Tv}{2}.$$
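For what it's worth, here is a quick numerical sanity check of this decomposition (not part of the proof; the reflection matrix and the use of `numpy` are my own illustrative choices):

```python
import numpy as np

# An example involution: a reflection, so T @ T = I.
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(T @ T, np.eye(2))

v = np.array([2.0, 5.0])
p = (v + T @ v) / 2   # candidate eigenvector for eigenvalue +1
m = (v - T @ v) / 2   # candidate eigenvector for eigenvalue -1

# T p = p, T m = -m, and the two pieces sum back to v.
assert np.allclose(T @ p, p)
assert np.allclose(T @ m, -m)
assert np.allclose(p + m, v)
```

The identities $Tp = p$ and $Tm = -m$ hold for any $T$ with $T^2 = I$, by exactly the algebra in the problem.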
Now the other question is to generalize this method to
(b) Prove that a linear operator $T$ such that $T^4=I$ decomposes a complex vector space into a sum of four eigenspaces.
This is also fairly straightforward to show using
$$v=\frac{v+Tv+T^{2}v+T^{3}v}{4}+\frac{v-Tv+T^{2}v-T^{3}v}{4}+\frac{v-iTv-T^{2}v+iT^{3}v}{4}+\frac{v+iTv-T^{2}v-iT^{3}v}{4}$$
where the terms correspond to the eigenvalues $1$, $-1$, $i$ and $-i$ respectively.
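The same kind of numerical check works here (again my own addition; the $4 \times 4$ cyclic-shift matrix is just a convenient example of an operator with $T^4 = I$):

```python
import numpy as np

# A 4x4 cyclic-shift permutation matrix: T sends e_k to e_{k+1 mod 4},
# so T^4 = I, and its eigenvalues are exactly 1, -1, i, -i.
T = np.roll(np.eye(4), 1, axis=0)
assert np.allclose(np.linalg.matrix_power(T, 4), np.eye(4))

v = np.array([1, 2, 3, 4], dtype=complex)
Tv, T2v, T3v = T @ v, T @ T @ v, T @ T @ T @ v

c1  = (v +    Tv + T2v +    T3v) / 4   # eigenvalue  1
cm1 = (v -    Tv + T2v -    T3v) / 4   # eigenvalue -1
ci  = (v - 1j*Tv - T2v + 1j*T3v) / 4   # eigenvalue  i
cmi = (v + 1j*Tv - T2v - 1j*T3v) / 4   # eigenvalue -i

# Each component is an eigenvector (or zero), and they sum back to v.
assert np.allclose(T @ c1,  c1)
assert np.allclose(T @ cm1, -cm1)
assert np.allclose(T @ ci,  1j * ci)
assert np.allclose(T @ cmi, -1j * cmi)
assert np.allclose(c1 + cm1 + ci + cmi, v)
```

Applying $T$ to each term and using $T^4 = I$ cyclically shifts the powers, which is exactly why each term picks up the stated eigenvalue.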
Then I came across this problem which says that
Suppose $A$ is a $3 \times 3$ complex matrix such that $A^3=-I$. Show that $A$ is diagonalizable.
I understand that I can solve this problem using the minimal polynomial, and I was able to do it. But then I tried this way. For any $v \in \mathbb{C}^3$ we have $$v=\frac{v-Av+A^{2}v}{3}+\frac{v-w^2Av+wA^{2}v}{3}+\frac{v-wAv+w^2A^{2}v}{3}$$ where $w$ is a primitive cube root of unity ($w^3=1$, $w \neq 1$) and the terms correspond to the eigenvalues $-1$, $-w$, $-w^2$ respectively.
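Here too one can check the decomposition numerically (my own example: $A$ is the negative of a $3$-cycle permutation matrix, so that $A^3 = -I$ and all three eigenvalues actually occur):

```python
import numpy as np

# A = -(3-cycle permutation matrix C), so A^3 = -C^3 = -I.
C = np.roll(np.eye(3), 1, axis=0)
A = -C
assert np.allclose(np.linalg.matrix_power(A, 3), -np.eye(3))

w = np.exp(2j * np.pi / 3)              # primitive cube root of unity
v = np.array([1, 2, 3], dtype=complex)
Av, A2v = A @ v, A @ A @ v

c1 = (v -        Av +        A2v) / 3   # eigenvalue -1
c2 = (v - w**2 * Av + w    * A2v) / 3   # eigenvalue -w
c3 = (v - w    * Av + w**2 * A2v) / 3   # eigenvalue -w^2

# Each component is an eigenvector (or zero), and they sum back to v.
assert np.allclose(A @ c1, -c1)
assert np.allclose(A @ c2, -w * c2)
assert np.allclose(A @ c3, -w**2 * c3)
assert np.allclose(c1 + c2 + c3, v)
```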
If I can show that $v$ can be written uniquely as such a sum, can I conclude that $A$ is diagonalizable? It is not necessary that all the eigenvalues occur; for example, $-1$ may repeat twice and $-w$ occur once, or something else. Does the decomposition take these cases into account somehow? I guess not.
Thanks for the help!
If $V$ is a finite-dimensional vector space and $T \colon V \rightarrow V$ is linear, and you show that $V = V^{\lambda_1} \oplus \cdots \oplus V^{\lambda_k}$, where each $V^{\lambda_i}$ is the eigenspace of $T$ corresponding to the eigenvalue $\lambda_i$, then this shows that $T$ is diagonalizable: take a basis for each $V^{\lambda_i}$ and combine them to form a basis of $V$. This is a basis of $V$ consisting of eigenvectors of $T$, which shows that $T$ is diagonalizable.
This argument works even if some of the $V^{\lambda_i}$-s are trivial ($V^{\lambda_i} = \{ 0_V \}$), in which case $\lambda_i$ won't be an eigenvalue of $T$. For example, for $T^2 = \mathrm{id}$, you can show that $V = V^1 \oplus V^{-1}$, and this shows that $T$ is diagonalizable, but it doesn't necessarily say that both $1$ and $-1$ are eigenvalues of $T$. Indeed, if $T = \mathrm{id}$ (or $T = -\mathrm{id}$) then $T^2 = \mathrm{id}$, but the only eigenvalue of $T$ is $\lambda = 1$ (or $\lambda = -1$).
In fact, if you show that $V = V^{\lambda_1} \oplus \cdots \oplus V^{\lambda_k}$, then you can say that $T$ is diagonalizable with eigenvalues $\lambda_i$, each of geometric multiplicity $\dim V^{\lambda_i}$, where you tacitly assume that saying $\lambda$ is an eigenvalue of $T$ of geometric multiplicity zero means that $\lambda$ isn't an eigenvalue at all.
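The "combine the eigenspace bases" step can also be made concrete numerically; as a sketch (my own example, a reflection with $T^2 = I$), stacking basis vectors of $V^1$ and $V^{-1}$ as columns of a matrix $P$ diagonalizes $T$:

```python
import numpy as np

T = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # T^2 = I

# Bases for V^1 and V^{-1} (here each eigenspace is one-dimensional).
b_plus  = np.array([1.0, 1.0])      # T b_plus  =  b_plus
b_minus = np.array([1.0, -1.0])     # T b_minus = -b_minus

# Stack the eigenspace bases as the columns of P; then P^{-1} T P
# is diagonal with the eigenvalues on the diagonal.
P = np.column_stack([b_plus, b_minus])
D = np.linalg.inv(P) @ T @ P
assert np.allclose(D, np.diag([1.0, -1.0]))
```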