My book gives the following example: given two independent random variables $X$ and $Y$ with exponential densities $f(x) = \lambda e^{-\lambda x}$ and $g(x) = \mu e^{-\mu x}$ (with CDF $G(x)$) respectively, $P(X < Y)$ is given by:
$$P(X<Y) = \int_0^\infty f(x)(1-G(x))dx$$
I don't understand the intuition behind this expression or how it was derived. Is $f(x)\,dx$ supposed to represent $P(X = x)$ (even though $f(x)$ is not a probability and $P(X = x)$ is $0$), with $1-G(x)$ representing $P(Y > x)$, so that the integral sums up what is essentially $P(X = x)\,P(Y > x)$ over all values of $x$?
Is this equation also true for any $f$ and $g$ if we change the lower limit of integration to $-\infty$?
The probability for an event is the expected value of the indicator random variable for the event.
$$\mathsf P(X<Y)=\mathsf E(\mathbf 1_{X<Y})$$
For independent exponential random variables $X,Y$, this expectation is the double integral of the indicator times the product of their probability density functions, taken over their joint support. The indicator can of course be absorbed into the limits of integration.$$\begin{align}\mathsf P(X<Y)&=\int_0^\infty\int_0^\infty \mathbf 1_{x<y}~f(x)~g(y)~\mathrm d y~\mathrm d x\\&=\int_0^\infty f(x)\int_x^\infty g(y)\mathrm d y~\mathrm dx\end{align}$$ Finally, that inner integral is precisely the survival function of $Y$ (i.e. one minus the CDF of $Y$, evaluated at the point $x$).
$$\begin{align}\mathsf P(X<Y)&=\int_0^\infty f(x)~\big(1-G(x)\big)~\mathrm d x\end{align}$$
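For exponentials this integral even has a closed form, since $\int_0^\infty \lambda e^{-\lambda x}\,e^{-\mu x}\,\mathrm dx = \lambda/(\lambda+\mu)$. Here is a quick Monte Carlo sketch (rates $\lambda=2$, $\mu=3$ are arbitrary choices for illustration) that checks the formula by simulation:

```python
import random

random.seed(0)

lam, mu = 2.0, 3.0   # arbitrary rates, chosen for illustration
n = 200_000

# Empirical estimate of P(X < Y) by simulating independent exponentials.
hits = sum(random.expovariate(lam) < random.expovariate(mu) for _ in range(n))
estimate = hits / n

# Closed form of the integral for exponential densities:
#   ∫₀^∞ λ e^{-λx} (1 - G(x)) dx = ∫₀^∞ λ e^{-λx} e^{-μx} dx = λ/(λ+μ)
closed_form = lam / (lam + mu)

print(estimate, closed_form)  # the two should agree to about 2 decimal places
```

With these rates the answer is $2/5 = 0.4$, which the simulation reproduces up to Monte Carlo error.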
Yes. The derivation above is being a little sneaky in using the exponential supports for the p.d.f.s before unpacking them, but that is okay.
In general, as long as $X,Y$ are independent continuous random variables:
$$\begin{align}\mathsf P(X<Y)&=\int_{-\infty}^\infty f(x)~\int_x^\infty g(y)\mathrm d y~\mathrm d x\\&=\int_{-\infty}^\infty f(x)~\big(1-G(x)\big)~\mathrm d x\end{align}$$
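As a sanity check of the general formula, one can evaluate the integral numerically for a non-exponential pair. A sketch using two (hypothetically chosen) independent normals, $X \sim N(0,1)$ and $Y \sim N(1,1)$, where $X - Y \sim N(-1, 2)$ gives the exact answer $\Phi(1/\sqrt 2)$:

```python
import math
from statistics import NormalDist

# Hypothetical example distributions: X ~ N(0,1), Y ~ N(1,1), independent.
X = NormalDist(0, 1)
Y = NormalDist(1, 1)

# Numerically evaluate ∫ f(x)(1 - G(x)) dx over a wide grid (trapezoid rule).
lo, hi, n = -10.0, 10.0, 20001
h = (hi - lo) / (n - 1)
vals = [X.pdf(lo + i * h) * (1 - Y.cdf(lo + i * h)) for i in range(n)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Since X - Y ~ N(-1, 2), P(X < Y) = P(X - Y < 0) = Φ(1/√2) exactly.
exact = NormalDist(0, 1).cdf(1 / math.sqrt(2))

print(integral, exact)  # the numeric integral matches the exact value
```

The two values agree to many decimal places, illustrating that the formula is not special to exponentials.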