In probability theory, the expected value of a random variable $X : \Omega \rightarrow \mathbb{R}$ is defined as $E(X) = \int_\Omega X \,dP$. Now, if $\Omega \subset \mathbb{R}$ and $P$ has a density with respect to the Lebesgue measure, we know by the Radon-Nikodym theorem that $E(X) = \int_{\Omega} X \,dP = \int_{\Omega} X \frac{dP}{d\lambda} \,d\lambda,$ where $\lambda$ denotes the Lebesgue measure. This actually enables us to calculate the expected value.
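For instance (just as an illustrative choice of density, not part of the general setup), if $\Omega = [0,\infty)$, $X(\omega) = \omega$ and $\frac{dP}{d\lambda}(\omega) = e^{-\omega}$, then $$E(X) = \int_0^\infty \omega\, e^{-\omega}\,d\omega = 1,$$ an ordinary integral that one can actually evaluate.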
Now, if $\Omega$ is some general (abstract) set, then we also have that
$E(X) = \int_{\Omega} X \,dP = \int_{X(\Omega)} x \,dP_x = \int_{X(\Omega)} x \frac{dP_x}{d\lambda} \,d\lambda$. The last equality is again Radon-Nikodym, but what about the second equality? This is a change of measure, where we transfer our integral from a measure on $\Omega$ to one on $X(\Omega)$. What is the theorem called that enables us to do this, or is it just something you have to show the way you always prove things in measure theory?
The measure $P_X$ (not $P_x$) is the image measure of $P$ by $X$, defined by the identity $P_X(B)=P(X^{-1}(B))$ for every $B$ in $\mathcal B(\mathbb R)$ (and $X$ being a random variable is exactly the hypothesis one needs to be sure that $P(X^{-1}(B))$ exists). The fact that, for every suitably integrable measurable function $u$, $$ \int_\Omega u(X)\,\mathrm dP=\int_{\mathbb R}u(x)\,\mathrm dP_X(x), $$ is indeed a theorem, valid in full generality (in particular, no density with respect to Lebesgue measure is required). Its proof, as given in probably every measure theory course around the globe, goes by first noting that the identity holds true by definition when $u$ is an indicator function, then extending its validity to larger and larger classes of functions $u$ by linearity and by monotone convergence.
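To spell out the first step in the notation above: if $u=\mathbf 1_B$ for some $B\in\mathcal B(\mathbb R)$, then $$\int_\Omega \mathbf 1_B(X)\,\mathrm dP = P(X^{-1}(B)) = P_X(B) = \int_{\mathbb R}\mathbf 1_B(x)\,\mathrm dP_X(x),$$ which is exactly the defining identity of $P_X$. Linearity then yields the identity for simple functions, monotone convergence for nonnegative measurable $u$, and splitting $u$ into its positive and negative parts handles the general integrable case.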