The need for the Lebesgue integral arises when computing things like 'expectation' and 'variance'. Finding the expectation, or expected value, involves summing the values a random variable takes, weighted by probability. Now recall that probability is defined for events in the sample space, but random variables are functions defined on the sample space. So computing this sum is like integrating a function, i.e. the values taken by the random variable (y-axis) against the probabilities (measure) defined on events in the sample space, i.e. the $\sigma$-field (x-axis).
What does the graph of this function look like? It has the values of the random variable on the y-axis, fine. But what exactly is on the x-axis? The random variable assigns values (y-axis) to individual outcomes, in such a way that the inverse image of any value set belongs to the collection of events; the random variable has to be measurable so that we know which event occurred once we know its value. The random variable does not assign values to probabilities of individual events, which is what was described as the x-axis here.
So measurability is a natural requirement when talking about random variables. We can find the probabilities (measure) of only those values of the random variable which can actually occur, i.e. whose preimages belong to the $\sigma$-field generated by the sample space. Also, note that argued this way it is clear (why?) that there is no obvious way to partition the x-axis à la Riemann (probabilities of events in the sample space corresponding to the values taken by the random variable), and the only way to integrate random variables is by starting on the y-axis (the values taken by the random variable).
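For concreteness, the "start on the y-axis" recipe from the quotes can be sketched on a finite sample space. The die and the indicator variable below are hypothetical illustrations, not taken from the quoted text:

```python
from collections import defaultdict

# Hypothetical example: a fair die. Outcomes 1..6, each with measure 1/6.
# X(omega) = 1 if omega is even, else 0 (an indicator random variable).
P = {omega: 1 / 6 for omega in range(1, 7)}
X = {omega: (1 if omega % 2 == 0 else 0) for omega in range(1, 7)}

def lebesgue_expectation(X, P):
    # Integrate "from the y-axis": group outcomes by the value X takes,
    # measure each preimage X^{-1}({v}), then sum value * measure.
    preimage_measure = defaultdict(float)
    for omega, value in X.items():
        preimage_measure[value] += P[omega]
    return sum(v * m for v, m in preimage_measure.items())

print(lebesgue_expectation(X, P))  # ≈ 0.5
```

The partition happens over the values on the y-axis, not over the x-axis, which is exactly the point of the second quote.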
Again, I can't understand what's wrong with Riemann integration in this case. Could anyone explain these two quotes in a more straightforward way, if possible?
Ad 1
Random variables are, indeed, measurable functions on the sample space. But the "shape" of these functions is not an issue in probability theory. What matters is the theoretical existence of measurable functions modelling the otherwise vague concept of randomness of numbers or other entities.
If you construct the sample space so that the random variable is a function from $[0,1]$ to $\mathbb{R}$, then the shape of the random variable is given by the shape of $F_X^{-1}(x)$, the inverse of the distribution function of the random variable at stake. In this case the mechanism $$X=F_X^{-1}(Y)$$ will produce $X$, where $Y$ has to be a random variable whose distribution is uniform over $[0,1]$. You still have to say what function $Y$ is: over $[0,1]$ we take the Lebesgue measure and $Y(x)=x$ as a function.
Example
Let the possible outcomes of a random experiment be real numbers over $[a,b]$ with uniform distribution.
Look at the following figure:
Here you can see the distribution function and its inverse. If you like, the inverse of the distribution function is the random variable, viewed as a function over $[0,1]$. If we use the Lebesgue measure on $[0,1]$ then $F_X^{-1}$ IS the random variable, a real function with a nice shape. This method is general: take any distribution function; its (generalized) inverse exists, and with the Lebesgue measure over $[0,1]$ you can always see the shape of the random variable as a function.
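A minimal sketch of this mechanism for the uniform example, assuming Python's `random.random()` stands in for the Lebesgue-uniform $Y$ (the interval endpoints $2$ and $5$ are arbitrary choices for illustration):

```python
import random

def inverse_cdf_uniform(a, b, y):
    # F_X(x) = (x - a)/(b - a) on [a, b], so F_X^{-1}(y) = a + (b - a) * y
    return a + (b - a) * y

random.seed(0)
# Y(x) = x under the Lebesgue measure on [0,1]; random() simulates it
samples = [inverse_cdf_uniform(2.0, 5.0, random.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# mean should be close to (2 + 5)/2 = 3.5 for the uniform distribution on [2, 5]
```

Feeding uniform $[0,1]$ values through $F_X^{-1}$ is exactly the mechanism $X=F_X^{-1}(Y)$ described above.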
Ad 2
Think of the Dirichlet-type function taking the value zero on the rationals and one on the irrationals. The Lebesgue integral of this function over $[0,1]$ is $1$, because the Lebesgue measure of the irrationals in $[0,1]$ is $1$ and the value of the function on that set is also $1$. The Lebesgue measure of the rationals is $0$, so, independently of the function's value there, the contribution of this part is $0$.
The Riemann integral can say nothing about this function. Why? Its definition is tied to a refining sequence of intervals and the freedom of choosing sample points within those intervals. Since the rationals and the irrationals are both dense in the reals, one can find sequences of Riemann sums converging to $0$, others converging to $1$, and hence the Riemann limit does not exist.
So, I wouldn't say that it was probability theory that triggered the concept of the Lebesgue integral. It was the natural development of the theory of functions and integrals that led to what we call the integral today.
EDIT
On the distribution of $X=F_X^{-1}(Y)$: $$P(F_X^{-1}(Y)<z)=P(Y<F_X(z))=F_X(z).$$
That is, $X$'s distribution function is $F_X$.
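This identity can be checked empirically. The sketch below uses the exponential distribution as a stand-in for $F_X$ (the rate $\lambda=1$ and the check point $z=1$ are arbitrary choices): generate $X=F_X^{-1}(Y)$ with uniform $Y$, then compare the empirical frequency of $X<z$ against $F_X(z)$.

```python
import math
import random

LAM = 1.0  # rate of the exponential distribution (hypothetical choice)

def F(z):
    # Exponential distribution function F_X(z) = 1 - exp(-lam * z), z >= 0
    return 1.0 - math.exp(-LAM * z)

def F_inv(y):
    # Its inverse: F_X^{-1}(y) = -log(1 - y) / lam
    return -math.log(1.0 - y) / LAM

random.seed(1)
xs = [F_inv(random.random()) for _ in range(200_000)]  # X = F_X^{-1}(Y)
z = 1.0
empirical = sum(x < z for x in xs) / len(xs)
# empirical frequency of {X < 1} should be close to F(1) = 1 - e^{-1} ≈ 0.632
```

The empirical distribution of the generated $X$ matches $F_X$, as the displayed computation predicts.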