Let $k$ be an arbitrary field, and let $V$ be an arbitrary $k$-vector space, possibly infinite-dimensional. Let $g\in\operatorname{End}_k(V)$. Then:
$g$ is diagonalizable if $V$ has a basis of eigenvectors for $g$;
$g$ is semisimple if $g$ is diagonalizable over $\overline{k}$;
$g$ is nilpotent if for each $v\in V$, there is some positive integer $N$ such that $g^N(v)=0$;
$g$ is unipotent if $g-1$ is nilpotent;
$g$ is locally finite if for each $v\in V$, the subspace $L_v:=\langle g^n(v)\mid n\in\mathbb{Z}_{\ge 0}\rangle$ is finite-dimensional.
To show: unipotent elements and semisimple elements are locally finite.
Let $g$ be unipotent; then $g-1$ is nilpotent. Take $v\in V$ and let $N\in\mathbb{N}$ be such that $(g-1)^N(v)=0$. Then every $(g-1)^n(v)$ lies in $\langle (g-1)^n(v)\mid n=0,1,\dots, N-1\rangle$, which is finite-dimensional, hence $g-1$ is locally finite. How can I deduce local finiteness for $g$ itself?
Let $g$ be semisimple: consider a $\overline{k}$-basis $(v_i)_i$ for $V$ consisting of eigenvectors of $g$ (say $g(v_i)=\lambda_i v_i$). Take $v\in V$; then $v=\sum_i \mu_iv_i$ (a finite sum!) with $\mu_i\in\overline{k}$. Then $g^n(v)=\sum_i \mu_i\lambda_i^nv_i$, so every $g^n(v)$ is contained in the span of these finitely many $v_i$. Therefore $L_v\subseteq\langle v_i\mid i\text{ occurring in the finite sum}\rangle$ is finite-dimensional. Is this alright?
Thanks.
In the unipotent case, the binomial theorem says that $$ (g-1)^N = \sum_{n=0}^N \binom{N}{n} (-1)^{N-n} g^n = \sum_{n=0}^{N-1} \binom{N}{n} (-1)^{N-n} g^n + g^N, $$ where we've peeled off the $N$th-power term. Now $(g-1)^N(v) = 0$ is equivalent to $$ g^N(v) = -\sum_{n=0}^{N-1} \binom{N}{n} (-1)^{N-n} g^n(v). $$ This means that $g^N(v) \in \operatorname{span}_k \{v, g(v), g^2(v), \dots, g^{N-1}(v)\}$. By induction on $n$ (apply $g$ to both sides and rewrite any resulting $g^N(v)$ using the relation above), you can show that $g^n(v) \in \operatorname{span}_k \{v, g(v), g^2(v), \dots, g^{N-1}(v)\}$ for all $n \in \mathbb{Z}_{\geq 0}$, hence $g$ is locally finite.
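The argument above can be checked numerically. The sketch below (my own illustrative example, not from the original discussion) takes a unipotent $g = I + s$ on $\mathbb{R}^3$ with $s$ strictly upper triangular, picks a vector $v$, and verifies that every power $g^n(v)$ stays inside $\operatorname{span}\{v, g(v), g^2(v)\}$:

```python
import numpy as np

# A hypothetical unipotent example: g = I + s with s strictly upper
# triangular, so (g - I)^3 = 0 on R^3; here N = 3.
s = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
g = np.eye(3) + s
v = np.array([0.0, 1.0, 0.0])

# Columns of B span <v, g(v), g^2(v)>; this span is only 2-dimensional
# for this particular v, which is fine -- local finiteness asks for
# *some* finite-dimensional invariant space, not a basis.
B = np.column_stack([np.linalg.matrix_power(g, n) @ v for n in range(3)])

# Every higher power g^n(v) should lie in the column span of B.
for n in range(3, 10):
    w = np.linalg.matrix_power(g, n) @ v
    coeffs, _, _, _ = np.linalg.lstsq(B, w, rcond=None)
    assert np.allclose(B @ coeffs, w), f"g^{n}(v) escaped the span"
```

The loop's assertions pass: once $g^N(v)$ is a combination of lower powers, applying $g$ repeatedly can never leave that span.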
For the semisimple case, your $\overline{k}$-eigenbasis $(v_i)_i$ consists of vectors that are $\overline{k}$-linear combinations of vectors in $V$ (a $k$-vector space). This is not a basis for $V$ as a $k$-vector space, and the eigenvectors themselves need not even lie in $V$. Certain $\overline{k}$-linear combinations of these $v_i$ do form a basis for $V$, but you have to show it!
An example to illustrate this phenomenon: Let $k = \mathbb{R}$ and $L \cong \mathbb{R}^2$ be the $2$-dimensional subspace in $V$ spanned by $\{g^n(v)\}$, where $$ v = \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$ and $g$ is the counterclockwise rotation by $\tfrac{\pi}{2}$, i.e. $$ g = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}. $$ Now, over $\overline{k} = \mathbb{C}$, the endomorphism is diagonalizable with eigenvalues $i$ and $-i$ and eigenvectors $$ w_1 = \begin{bmatrix} i \\ 1 \end{bmatrix} \quad\text{and}\quad w_2 = \begin{bmatrix} -i \\ 1 \end{bmatrix}, $$ respectively. These form a $\mathbb{C}$-basis for the scalar extension $\mathbb{C} \otimes_{\mathbb{R}} L$, but it's only a judiciously chosen $\mathbb{C}$-linear combination of these vectors that gives an $\mathbb{R}$-basis for $L$. Namely, $$ v_1 = -\tfrac{i}{2} \bigl( w_1 - w_2 \bigr) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$ and $$ v_2 = \tfrac{1}{2} \bigl( w_1 + w_2 \bigr) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. $$
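The rotation example is easy to verify numerically. The sketch below (following the example above, with names $w_1, w_2, v_1, v_2$ matching it) checks the complex eigenvector equations, recovers the real basis from the complex combinations, and confirms local finiteness directly via $g^4 = I$:

```python
import numpy as np

# Counterclockwise rotation by pi/2 on R^2; no real eigenvectors.
g = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Complex eigenvectors over kbar = C, for eigenvalues i and -i.
w1 = np.array([1j, 1.0])
w2 = np.array([-1j, 1.0])
assert np.allclose(g @ w1, 1j * w1)
assert np.allclose(g @ w2, -1j * w2)

# Judiciously chosen C-linear combinations land back in R^2 and
# give the standard real basis of L.
v1 = -0.5j * (w1 - w2)
v2 = 0.5 * (w1 + w2)
assert np.allclose(v1, [1.0, 0.0])
assert np.allclose(v2, [0.0, 1.0])

# Local finiteness of g is immediate here: g^4 = I, so for any v the
# span <g^n(v)> is generated by v, g(v), g^2(v), g^3(v).
assert np.allclose(np.linalg.matrix_power(g, 4), np.eye(2))
```

All assertions pass, matching the hand computation: $w_1 - w_2 = (2i, 0)^T$ and $w_1 + w_2 = (0, 2)^T$, so the stated combinations give $(1,0)^T$ and $(0,1)^T$.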