I'm trying to prove rigorously that $\int_{\Omega}X\;dP=\int_{-\infty}^{\infty}xf(x)\;dx$, where $f$ is the pdf of the random variable $X$.
I can't find a proof in the Wikipedia article, or if it's there then it's disguised enough that I can't recognize it. What I've got is a semi-rigorous (and probably incorrect) proof of the equality; maybe somebody could help me flesh out the details.
Proof:
Begin from the definition $E[X]:=\int_{\Omega}X\;dP$.
Given a random variable $X:\Omega\rightarrow\mathbb{R}$, we have the induced (pushforward) measure on $(\mathbb{R}, \mathscr{B}(\mathbb{R}))$ given by $A\mapsto P(\{X\in A\})$. Assuming this measure is absolutely continuous with respect to Lebesgue measure (which is precisely the assumption that $X$ has a pdf), the Radon-Nikodym theorem gives a measurable function $f:\mathbb{R}\rightarrow [0,\infty)$ such that $$P(\{X\in A\}) = \int_Af\,d\mu,$$
where $\mu$ is the Lebesgue measure. From here it gets a bit hand-wavy. Since this measure was induced by the random variable $X$, on $\mathbb{R}$ the random variable is simply the identity function $g(x)=x$. Thus by definition we write the expected value of $g$ with respect to our Radon-Nikodym-produced measure $P_X$ in the form $$\int_{-\infty}^{\infty}x\;dP_X(x).$$
Now, since $f$ is the Radon-Nikodym derivative of the induced measure with respect to Lebesgue measure, this becomes $$\int_{-\infty}^{\infty}xf(x)\;dx.$$
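Not part of the proof, but as a numerical sanity check of the identity itself, here is a small Python sketch (my own illustration) comparing both sides for the concrete choice $X\sim\mathrm{Exp}(1)$, so $f(x)=e^{-x}$ on $[0,\infty)$ and both sides equal $1$:

```python
import math
import random

random.seed(0)

# Left-hand side: E[X] = ∫_Ω X dP, estimated by Monte Carlo sampling of X.
n = 200_000
lhs = sum(random.expovariate(1.0) for _ in range(n)) / n

# Right-hand side: ∫_0^∞ x e^{-x} dx, via a left Riemann sum
# truncated at x = 50 (the tail beyond 50 is negligible).
dx = 1e-3
rhs = sum(i * dx * math.exp(-i * dx) * dx for i in range(int(50 / dx)))

print(lhs, rhs)  # both should be close to the true value 1
```

Of course this checks only one distribution; it shows what the identity asserts, not why it holds.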
What does everyone think about this?
I'll try to prove a slightly more general result. Let $(\Omega,\ \mathcal{E},\ P)$ be a probability space and let $X\colon \Omega\longrightarrow \mathbb{R}$ be a random variable, i.e., for each $I\in \mathcal{B}$ we have $X^{-1}(I)\in\mathcal{E}$, where $\mathcal{B}$ is the usual Borel $\sigma$-algebra on $\mathbb{R}$. Write $\mu:=\mu_{X}$ for the probability distribution of $X$, i.e., for the measure defined on $\mathcal{B}$ by $\mu(I):=P(X^{-1}(I))$ for each $I\in\mathcal{B}$. Then the following holds: for every Borel function $\phi\colon\mathbb{R}\longrightarrow\mathbb{R}$, the integral $\int_{\mathbb{R}}\phi(x)\ d\mu$ exists if and only if $\int_{\Omega}\phi(X)\ dP$ exists, and in that case $$\int_{\mathbb{R}}\phi(x)\ d\mu=\int_{\Omega}\phi(X)\ dP.$$
(When I say that the Lebesgue integral of a measurable function exists, I allow it to be infinite.) The proof is quite straightforward, but it requires some measure-theoretic tools: approximation by simple functions and Lebesgue's monotone convergence theorem.

First suppose $\phi$ is a (finitely) simple, positive function. Then $\phi(X)$ is also simple and positive, so both integrals in question exist. Writing $\phi=\sum_{i=1}^{n} c_{i}1_{E_{i}}$, where $n=\vert \phi(\mathbb{R})\vert$, $\phi (\mathbb{R})=\{c_{1},\dots,c_{n}\}$ and $E_{i}:=\phi^{-1}(\{c_{i}\})$, we get $$\int_{\mathbb{R}}\phi (x) \ d\mu=\sum_{i=1}^{n}c_{i}\mu (E_{i})=\sum_{i} c_{i}P(X^{-1}(E_{i}))=\sum_{i} c_{i}P(X^{-1}(\phi^{-1}(\{c_{i}\})))=\int_{\Omega} \phi(X)\ dP,$$ where the last equality holds because $\phi(X)$ is the simple function $\sum_{i} c_{i}1_{X^{-1}(E_{i})}$.

Now assume $\phi$ is a non-negative Borel function. Then there exists a non-decreasing sequence $(\phi_{n})_{n\in \mathbb{N}}$ of simple, positive functions such that $\lim\limits_{n\to \infty}\phi_{n}(x)=\phi (x)$ for every $x\in\mathbb{R}$. By the monotone convergence theorem (applied on both spaces) we get immediately that $$\int_{\mathbb{R}}\phi (x) \ d\mu=\int_{\mathbb{R}}\Big(\lim_{n\to\infty}\phi_{n} (x)\Big) \ d\mu =\lim_{n\to\infty} \int_{\mathbb{R}}\phi_{n} (x) \ d\mu=\lim_{n\to\infty} \int_{\Omega}\phi_{n} (X) \ dP=\int_{\Omega} \phi(X)\ dP.$$

Finally, let $\phi\colon \mathbb{R}\longrightarrow\mathbb{R}$ be any Borel function, with no further restrictions. Write $\phi=\phi^{+}-\phi^{-}$, where $\phi^{+}:=\max(\phi,0)$ and $\phi^{-}:=\max(-\phi,0)$. Suppose $\int_{\mathbb{R}}\phi (x) \ d\mu$ exists: then at least one of $\int_{\mathbb{R}}\phi^{+} (x) \ d\mu$ and $\int_{\mathbb{R}}\phi^{-}(x) \ d\mu$ (say, the first one) is finite, and hence by the non-negative case $\int_{\Omega}(\phi(X))^{+} \ dP$ is also finite, i.e., $\int_{\Omega}\phi(X)\ dP$ exists. Applying the non-negative case to $\phi^{+}$ and $\phi^{-}$ separately and subtracting gives the claimed equality; the converse direction is symmetric.

To recover the identity in the question, take $\phi(x)=x$: when $\mu$ has a density $f$ with respect to Lebesgue measure, $d\mu=f(x)\,dx$, so $$\int_{\Omega}X\ dP=\int_{\mathbb{R}}x\ d\mu=\int_{-\infty}^{\infty}xf(x)\,dx.$$
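The change-of-variables identity can also be sanity-checked numerically. Here is a short Python sketch (my own illustration; the choices $X\sim\mathrm{Exp}(1)$ and $\phi(x)=x^{2}$ are assumed test data, not part of the proof) comparing $\int_{\Omega}\phi(X)\,dP$, estimated by Monte Carlo, with $\int_{\mathbb{R}}\phi\,d\mu$ computed as a Riemann sum against the density $e^{-x}$; both should approximate $E[X^{2}]=2$.

```python
import math
import random

random.seed(1)

def phi(x):
    # Test function; any Borel function with a finite integral would do.
    return x * x

# ∫_Ω φ(X) dP: average φ over independent draws of X ~ Exp(1).
n = 500_000
lhs = sum(phi(random.expovariate(1.0)) for _ in range(n)) / n

# ∫_ℝ φ dμ = ∫_0^∞ x² e^{-x} dx: left Riemann sum, truncated at x = 60.
dx = 1e-3
rhs = sum(phi(i * dx) * math.exp(-i * dx) * dx for i in range(int(60 / dx)))

print(lhs, rhs)  # both should be close to E[X²] = 2
```

The point of the comparison: the left-hand side never touches the density, only samples of $X$; the right-hand side never touches $\Omega$, only the distribution on $\mathbb{R}$.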