In an exercise, my teacher asked us to approximate the expected value of $g(X)$, where $X$ is a random variable with probability density function $f(x)$, using the Monte Carlo method. I thought about it and came up with two ways of doing this:
Using $f(x)$, I can compute the cumulative distribution function $F(x)$ of $X$. Then I can simulate a large number of values $\{u_i\}_1^n$ with distribution $U(0,1)$ and, by the inverse transform method, set $x_i=F^{-1}(u_i)$, so that $\{x_i\}_1^n$ is a sample distributed as $X$. Then we can estimate $$E[g(X)]\simeq\frac{1}{n}\sum_{i=1}^n g(x_i)$$
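As a sketch of this first method, here is a small Python example. The distribution and function are my own illustrative choices, not part of the exercise: $X\sim\text{Exp}(1)$, so $F(x)=1-e^{-x}$ and $F^{-1}(u)=-\ln(1-u)$, and $g(x)=x^2$, for which the exact answer is $E[X^2]=2$.

```python
import math
import random

# Illustrative choices (my assumption, not from the exercise):
# X ~ Exponential(1): F(x) = 1 - e^{-x}, so F^{-1}(u) = -ln(1 - u),
# and g(x) = x^2, so the exact value is E[g(X)] = E[X^2] = 2.

def F_inv(u):
    """Inverse CDF of the Exponential(1) distribution."""
    return -math.log(1.0 - u)

def g(x):
    return x * x

def mc_inverse_transform(n, seed=0):
    """Method 1: draw u_i ~ U(0,1), map through F^{-1}, average g(x_i)."""
    rng = random.Random(seed)
    return sum(g(F_inv(rng.random())) for _ in range(n)) / n

print(mc_inverse_transform(100_000))  # should be close to 2
```

With $n=10^5$ draws the estimate is typically within a few hundredths of the true value $2$.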
We know that the expected value is given by the integral $$E[g(X)]=\int_{-\infty}^{\infty}g(x)f(x)dx$$ Using a change of variable, I was able to find an equivalent integral $$\int_0^1h(x)dx$$
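One change of variable that produces such an integral (assuming $F$ is invertible; this may or may not be the substitution I used) is $u=F(x)$, so that $du=f(x)\,dx$ and
$$\int_{-\infty}^{\infty}g(x)f(x)\,dx=\int_0^1 g(F^{-1}(u))\,du,$$
which has the stated form with $h(u)=g(F^{-1}(u))$.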
Now, this integral is equal to $E[h(U)]$ where $U$ has uniform distribution $U(0,1)$. We can simulate $\{u_i\}_1^n$ with distribution $U(0,1)$ and then approximate the integral as: $$E[g(X)]=\int_0^1h(x)dx\simeq\frac{1}{n}\sum_{i=1}^nh(u_i)$$
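Here is a sketch of this second method in Python, reusing the same illustrative setup as above (my assumption, not from the exercise): $X\sim\text{Exp}(1)$ and $g(x)=x^2$, with the substitution $u=F(x)$ giving $h(u)=\left(-\ln(1-u)\right)^2$ on $(0,1)$.

```python
import math
import random

# Same illustrative setup as before (my assumption, not from the exercise):
# X ~ Exponential(1), g(x) = x^2, exact answer E[g(X)] = 2.
# The substitution u = F(x) gives h(u) = g(F^{-1}(u)) = (-ln(1 - u))^2.

def h(u):
    return (-math.log(1.0 - u)) ** 2

def mc_uniform_integral(n, seed=0):
    """Method 2: estimate the integral of h over (0,1) by averaging h(u_i)."""
    rng = random.Random(seed)
    return sum(h(rng.random()) for _ in range(n)) / n

print(mc_uniform_integral(100_000))  # should be close to 2
```

Note that with this particular choice of $h$, the summand $h(u_i)$ equals $g(F^{-1}(u_i))$, so on the same uniform draws this computes exactly the same average as the first method.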
My questions are: Are both of these methods correct? If so, are they equivalent, or is one better than the other at estimating the expected value?