Let $T\in \mathscr{L}(V)$ be a linear operator with characteristic polynomial $p_T(x)=(x-\lambda_1)^{n_1}\dots(x-\lambda_t)^{n_t}$, where $n_i\geq 1$ and $\lambda_i\neq\lambda_j$ for $i\neq j$. Show that $T$ can be written as a direct sum of $t$ linear operators, i.e., there are vector subspaces $W_1,\dots,W_t\subset V$ such that $W_i\neq\{\vec{0}\}$ is $T$-invariant for all $i \in \{1,\dots,t\}$, $V=W_1\oplus\dots\oplus W_t$, and we write $T=T_1\oplus\dots\oplus T_t$, where $$T_i:W_i\to W_i$$ $$w\mapsto T(w).$$
I see that if $T$ is diagonalizable we are done, since we can take $W_i=E(\lambda_i)$, the eigenspace associated to $\lambda_i$. I also know that, since $p_T(x)$ has the form above, $T$ admits a Jordan canonical form, which provides $W_i$'s as required.
But what really troubles me is that, in the specific book I'm using, this exercise comes before all of the Jordan form theory, in the section on $T$-invariance, the very section after diagonalization. Furthermore, the book PROVES the theorem guaranteeing the existence of the "special" $W_i$'s that give rise to the Jordan canonical form much later, in the Jordan form section.
All this makes me think that what the exercise requires is NOT the $W_i$'s of Jordan's theorem, as if there were a simpler construction (one whose matrix has no specific form beyond being block diagonal).
Does such a construction really exist?
Consider the ascending sequence of subspaces
$$ \ker(T - \lambda_1 \cdot \mathrm{id}) \subseteq \ker((T - \lambda_1 \cdot \mathrm{id})^2) \subseteq \ldots \subseteq V. $$
Since $V$ is finite dimensional, this sequence must stabilize, so there exists a minimal $m_1 \in \mathbb{N}$ such that $\ker((T - \lambda_1 \cdot \mathrm{id})^{m_1+i}) = \ker((T - \lambda_1 \cdot \mathrm{id})^{m_1})$ for all $i \geq 0$. Define
$$ W_1 := \ker((T - \lambda_1 \cdot \mathrm{id})^{m_1}), \,\, V_1 := \mathrm{im}((T - \lambda_1 \cdot \mathrm{id})^{m_1}) $$
and show that $W_1 \cap V_1 = \{ 0 \}$, which implies, by the rank-nullity theorem (the two dimensions sum to $\dim V$), that $V = W_1 \oplus V_1$. Note that both $W_1$ and $V_1$ are $T$-invariant, and thus we have
$$ p_T(x) = p_{T|_{W_1}}(x) p_{T|_{V_1}}(x). $$
The vector space $W_1$ contains all the eigenvectors of $T$ associated to the eigenvalue $\lambda_1$ and so $\dim W_1 \geq 1$. If $p_{T|_{V_1}}(\lambda_1) = 0$ then there exists an eigenvector $v \in V_1$ of $T$ corresponding to the eigenvalue $\lambda_1$ but then $v \in W_1$, contradicting the fact that $W_1 \cap V_1 = \{ 0 \}$. Thus, $(x - \lambda_1)$ does not divide $p_{T|_{V_1}}(x)$ and so we must have
$$ p_{T|_{W_1}}(x) = (x - \lambda_1)^{n_1}, \,\,\, p_{T|_{V_1}}(x) = \prod_{i=2}^t (x - \lambda_i)^{n_i}. $$
Finally, note that $\dim W_1 \geq 1$ implies $\dim V_1 < \dim V$; continue the argument inductively on $T|_{V_1}$ to obtain the required decomposition.
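To see the construction concretely, here is a small numerical sketch (the matrix, tolerance, and the helper `stable_kernel_power` are my own illustrative choices, not from the exercise). It takes a non-diagonalizable $4\times 4$ matrix with $p_T(x) = (x-2)^2(x-3)^2$, finds the stabilization exponent $m_1$ for $\lambda_1 = 2$ by watching the ranks of the powers of $T - \lambda_1\,\mathrm{id}$, and reads off $\dim W_1$ and $\dim V_1$ from rank-nullity:

```python
import numpy as np

# A non-diagonalizable matrix with p_T(x) = (x-2)^2 (x-3)^2:
# a single Jordan block for 2 and a diagonal pair of 3s.
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])

def stable_kernel_power(A, lam, tol=1e-9):
    """Return the minimal m with ker((A - lam*I)^m) = ker((A - lam*I)^(m+1)),
    together with M = (A - lam*I)^m. Kernel growth is detected via rank drops:
    the kernel strictly grows exactly while the rank strictly drops."""
    n = A.shape[0]
    N = A - lam * np.eye(n)
    m, M = 1, N.copy()
    while np.linalg.matrix_rank(M, tol) > np.linalg.matrix_rank(M @ N, tol):
        M = M @ N
        m += 1
    return m, M

m1, M = stable_kernel_power(A, 2.0)
dim_W1 = A.shape[0] - np.linalg.matrix_rank(M)  # W_1 = ker(M)
dim_V1 = np.linalg.matrix_rank(M)               # V_1 = im(M)
print(m1, dim_W1, dim_V1)  # -> 2 2 2
```

Here $m_1 = 2$ (the chain $\ker(T-2\,\mathrm{id}) \subsetneq \ker((T-2\,\mathrm{id})^2)$ stabilizes at the second power), $\dim W_1 = n_1 = 2$, and $\dim V_1 = 2$, matching $p_{T|_{V_1}}(x) = (x-3)^2$; note $W_1 = \operatorname{span}(e_1, e_2)$ and $V_1 = \operatorname{span}(e_3, e_4)$ intersect trivially, as the argument requires.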