Proof of Expectation of Function of Random Variable

In 'Probability and Statistics' by A. Papoulis, the following graph is shown:

[Image: the function $y = g(x)$ of the random variable $\boldsymbol{x}$]

[Image: the probability of $\boldsymbol{y}$ written as a sum of probabilities of $\boldsymbol{x}$-intervals]

Note that $dx_2 < 0$.

Later, in a proof of the mean of a function of a random variable, this graph is referenced, only now all $dx > 0$ and the sets are:

[Image: the probability of $\boldsymbol{y}$ as a sum of probabilities of $\boldsymbol{x}$-intervals, now with $dx_2 > 0$]
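In symbols, I read the second graph as saying the following (my reconstruction, not Papoulis's exact notation, with $x_1$, $x_2$, $x_3$ the roots of $y = g(x)$ and all $dx_i > 0$):

```latex
% My reconstruction of the second graph, not Papoulis's exact notation.
% x_1, x_2, x_3 are the roots of y = g(x); all dx_i > 0 here.
P\{y < \boldsymbol{y} \le y + dy\}
  = f_{\boldsymbol{x}}(x_1)\,dx_1
  + f_{\boldsymbol{x}}(x_2)\,dx_2
  + f_{\boldsymbol{x}}(x_3)\,dx_3 ,
\qquad dx_i = \frac{dy}{\lvert g'(x_i)\rvert} .
```

The absolute value of the slope is what lets all the $dx_i$ be taken positive even where $g$ is decreasing.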

From here, we get:

[Image: $y f(y)\,dy$ as a sum of $g(x) f(x)\,dx$ terms]

and so to:

[Image: the mean of a function of a random variable]
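For reference, I believe the two formulas being pointed at are (in the usual notation, since $g(x_i) = y$ at every root):

```latex
y\,f_{\boldsymbol{y}}(y)\,dy
  = g(x_1) f_{\boldsymbol{x}}(x_1)\,dx_1
  + g(x_2) f_{\boldsymbol{x}}(x_2)\,dx_2
  + g(x_3) f_{\boldsymbol{x}}(x_3)\,dx_3 ,
```

and, integrating over all such strips,

```latex
E\{g(\boldsymbol{x})\}
  = \int_{-\infty}^{\infty} y\,f_{\boldsymbol{y}}(y)\,dy
  = \int_{-\infty}^{\infty} g(x)\,f_{\boldsymbol{x}}(x)\,dx .
```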

My question (finally) concerns the change from $dx_2 < 0$ to $dx_2 > 0$, and the corresponding change of the interval on the x-axis from $\{x_2 - |dx_2| < \boldsymbol{x} < x_2\}$ to $\{x_2 < \boldsymbol{x} < x_2 + dx_2\}$. I feel Papoulis slides over this step without spelling out why it is justified.

My thoughts are:

  • The mean is found using integration. $dx$ is taken to be positive when we integrate.
  • The function is decreasing at $x_2$, so $dy$ will be negative for positive $dx$. But this differs from the $dy$ in the original graph:

[Image: the different $dy$]

This is a different $dy$, and a different interval $\{y - dy < \boldsymbol{y} < y\}$.

  • Rather than splitting the integration at $x_1$, $x_2$ and $x_3$, it would make more sense to split the domain so that $g$ is monotonically increasing or decreasing on each piece, which gets around these issues.
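As a sanity check on my reading (my own experiment, not Papoulis's argument), here is a quick numerical comparison for the non-monotonic example $g(x) = x^2$ with $\boldsymbol{x} \sim N(0,1)$, so that $E\{g(\boldsymbol{x})\} = 1$ exactly. Both integrals use a plain midpoint rule with a positive step, which is exactly the point at issue: $dx$ sweeps left to right regardless of where $g$ is increasing or decreasing, and the sign/slope information lives in $f_{\boldsymbol{y}}$ via $|g'(x_i)|$.

```python
import math

# Sanity-check example (mine, not Papoulis's): g(x) = x^2, x ~ N(0, 1),
# so E{g(x)} = E{x^2} = 1 exactly.

def f_x(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def g(x):
    return x * x

def mean_via_x(a=-8.0, b=8.0, n=100_000):
    """E{g(x)} = integral of g(x) f_x(x) dx, midpoint rule.

    dx is always positive: integration sweeps left to right whether
    g is increasing or decreasing on a given stretch.
    """
    dx = (b - a) / n
    return dx * sum(g(a + (k + 0.5) * dx) * f_x(a + (k + 0.5) * dx)
                    for k in range(n))

def f_y(y):
    """Density of y = x^2 from the two monotone branches x = +/- sqrt(y):
    f_y(y) = sum_i f_x(x_i) / |g'(x_i)|, with |g'(x_i)| = 2 sqrt(y)."""
    r = math.sqrt(y)
    return (f_x(r) + f_x(-r)) / (2.0 * r)

def mean_via_y(a=1e-9, b=64.0, n=200_000):
    """E{y} = integral of y f_y(y) dy, midpoint rule on the y-axis."""
    dy = (b - a) / n
    return dy * sum((a + (k + 0.5) * dy) * f_y(a + (k + 0.5) * dy)
                    for k in range(n))

print(mean_via_x(), mean_via_y())  # both should be close to 1
```

Both routes agree, which supports the view that the $dx_2 < 0$ bookkeeping is absorbed into $|g'(x_2)|$ once everything is written with positive increments.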

Can anyone clarify my thoughts?

Thanks

Steven