If $X$ is a discrete random variable taking the values $x_1, x_2, \dots$ with probability mass function $f_X$, then we define its mean to be the number $$\sum_i x_i f_X(x_i) \tag{1}$$ provided the series above is absolutely convergent.
That is the definition of the mean of a discrete r.v. I've encountered in my books (Introduction to the Theory of Statistics by Mood, Probability and Statistics by DeGroot).
I know that if a series is absolutely convergent then it is convergent, but why do we need to ask for the series (1) to converge absolutely, instead of just asking it to converge? I'm taking my introductory courses in probability, and so far I haven't found a situation that forces us to restrict ourselves this way.
Any comments about the subject are appreciated.
It's because if the series is convergent but not absolutely convergent, the Riemann rearrangement theorem says you can reorder its terms to make the sum converge to any value you like (or even diverge). Any good notion of "mean" or "expectation" should not depend on the ordering of the $x_i$'s.
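To see the rearrangement phenomenon concretely, here is a small sketch in Python using the alternating harmonic series $\sum (-1)^{n+1}/n$, which converges (to $\ln 2$) but not absolutely. (As a hypothetical probability example: an r.v. $X$ with $P(X = (-1)^{n+1} 2^n / n) = 2^{-n}$ produces exactly these terms $x_n f_X(x_n)$ in its mean series.) The greedy reordering below is the construction behind the Riemann rearrangement theorem.

```python
def natural_partial_sum(n):
    """Partial sum of the alternating harmonic series in its natural order."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged_partial_sum(target, n_terms):
    """Greedily reorder the SAME terms to steer the partial sums toward `target`:
    add unused positive terms (1, 1/3, 1/5, ...) while below the target,
    subtract unused negative terms (1/2, 1/4, ...) while above it.
    Since the terms tend to 0, the partial sums converge to `target`."""
    pos = 1   # denominator of the next unused positive term
    neg = 2   # denominator of the next unused negative term
    s = 0.0
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

print(natural_partial_sum(100000))          # close to ln 2 ~ 0.6931
print(rearranged_partial_sum(3.0, 100000))  # same terms, now close to 3.0
```

Both calls sum (a finite chunk of) the very same collection of terms; only the order differs, yet the limits differ. That is exactly the pathology absolute convergence rules out.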
For a more abstract reason, note that we define the expectation $E[X]$ of a random variable $X$ defined on a probability space $(\Omega, \mathcal{F}, P)$ as the Lebesgue integral $\int_{\Omega} X \, dP$. By the definition of the Lebesgue integral, this is only well-defined if the integrand is absolutely integrable. If you learn more about measure theory, you will also learn why this definition makes sense. It is done to avoid ill-defined expressions like $\infty - \infty$ in the theory.
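To spell out where the $\infty - \infty$ issue comes from: the Lebesgue integral of a general $X$ is built from its positive and negative parts,
$$X^+ = \max(X, 0), \qquad X^- = \max(-X, 0), \qquad E[X] = \int_{\Omega} X^+ \, dP - \int_{\Omega} X^- \, dP,$$
and this difference is only well-defined when both integrals are finite, i.e. when
$$E[|X|] = \int_{\Omega} |X| \, dP < \infty.$$
For a discrete $X$ this finiteness condition is precisely the absolute convergence of the series (1).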