I see the following in the paper:
$\{X(k)\}$ is a nonnegative random sequence.
If I know $$\sum_{k=1}^{\infty}E[X(k)] < \infty$$
By the monotone convergence theorem, this implies $$E[\sum_{k=1}^{\infty}X(k)] < \infty$$ and therefore $$\sum_{k=1}^{\infty}X(k)< \infty$$
My question is: how can we get the second inequality from the monotone convergence theorem?
From Wikipedia, the monotone convergence theorem looks like this: if $0 \leq a_{j,k} \leq a_{j+1,k}$ for all $j, k \in \mathbb{N}$, then
$$\lim_{j \to \infty} \sum_{k=1}^{\infty} a_{j,k} = \sum_{k=1}^{\infty} \lim_{j \to \infty} a_{j,k}.$$
I cannot see how to get the second inequality from this theorem.

The version of the monotone convergence theorem you have stated in your question is for discrete measure spaces, not for general measure spaces. The general monotone convergence theorem reads as follows: let $(X,\mathcal{A},\mu)$ be a measure space and let $(u_n)_{n \in \mathbb{N}}$ be a sequence of measurable functions $u_n: X \to [0,\infty]$ with $u_n \leq u_{n+1}$ for all $n \in \mathbb{N}$. Then
$$\int \sup_{n \in \mathbb{N}} u_n \, d\mu = \sup_{n \in \mathbb{N}} \int u_n \, d\mu. \tag{1}$$
In your setting $(X,\mathcal{A},\mu)$ equals the underlying probability space $(\Omega,\mathcal{A},\mathbb{P})$. If we set
$$U_n := \sum_{k=1}^n X_k,$$
then $0 \leq U_1 \leq U_2 \leq \dots$ as $X_k$ is non-negative for all $k \in \mathbb{N}$. Moreover,
$$U(\omega) = \sup_{n \in \mathbb{N}} U_n(\omega) = \sum_{k=1}^{\infty} X_k(\omega), \qquad \omega \in \Omega. \tag{2}$$ Applying the monotone convergence theorem yields
$$\begin{align*} \mathbb{E} \left( \sum_{k \geq 1} X_k \right) \stackrel{(2)}{=} \mathbb{E}(U) &\stackrel{(1)}{=} \sup_{n \in \mathbb{N}} \mathbb{E}(U_n) \\ &= \sup_{n \in \mathbb{N}} \sum_{k=1}^n \mathbb{E}(X_k). \end{align*}$$
In the last step we have used the linearity of the integral. Finally, as $\mathbb{E}(X_k) \geq 0$, we have
$$\sup_{n \in \mathbb{N}} \sum_{k=1}^n \mathbb{E}(X_k) = \sum_{k \geq 1} \mathbb{E}(X_k).$$
This proves the identity $\mathbb{E}\left(\sum_{k \geq 1} X_k\right) = \sum_{k \geq 1} \mathbb{E}(X_k)$. In particular, the assumption $\sum_{k=1}^{\infty} \mathbb{E}(X_k) < \infty$ gives $\mathbb{E}\left(\sum_{k=1}^{\infty} X_k\right) < \infty$, and since a non-negative random variable with finite expectation is almost surely finite, it follows that $\sum_{k=1}^{\infty} X_k < \infty$ almost surely.
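As a quick numerical sanity check (not part of the proof, and with a concrete distribution chosen by me for illustration), take $X_k$ exponentially distributed with mean $2^{-k}$, so that $\sum_{k \geq 1} \mathbb{E}(X_k) = \sum_{k \geq 1} 2^{-k} = 1 < \infty$. Sampled paths of $\sum_k X_k$ then come out finite, and their empirical average is close to $1$:

```python
import random

random.seed(0)

K = 60       # truncation level for the series; 2**-60 is negligible
N = 10_000   # number of sample paths

# X_k ~ Exponential with rate 2**k, i.e. mean 2**-k,
# so sum_k E[X_k] = sum_k 2**-k = 1 (up to truncation).
sums = []
for _ in range(N):
    s = sum(random.expovariate(2.0 ** k) for k in range(1, K + 1))
    sums.append(s)

avg = sum(sums) / N
print("all sampled paths finite:", all(s < float("inf") for s in sums))
print(f"empirical E[sum_k X_k] = {avg:.4f}  (theory: 1.0)")
```

Of course, a simulation can only ever look at finitely many terms and finitely many paths; the almost-sure finiteness is exactly what the monotone convergence argument above guarantees.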