In 'Probability and Statistics' by A. Papoulis, the following graph is shown:
Note that $dx_2 < 0$.
Later, in a proof of the mean of a function of a random variable, the same graph is referenced, but now all $dx_i > 0$ and the sets are:
From here, we get:
and from there to:
My question (finally) concerns the change from $dx_2 < 0$ to $dx_2 > 0$, and the corresponding change of the interval on the x-axis from $\{x_2 - |dx_2| < \boldsymbol{x} < x_2\}$ to $\{x_2 < \boldsymbol{x} < x_2 + dx_2\}$. I feel he glosses over this without spelling out why he has done it.
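To fix notation, here is my transcription of the two conventions (the book's figures are not reproduced above, so the symbols may differ slightly from Papoulis' exactly). In the first graph the event $\{y < \boldsymbol{y} \le y + dy\}$ is pulled back to the x-axis, and at the decreasing branch $dx_2 = dy/g'(x_2) < 0$:

$$\{y < \boldsymbol{y} \le y + dy\} = \{x_1 < \boldsymbol{x} \le x_1 + dx_1\} \cup \{x_2 - |dx_2| < \boldsymbol{x} \le x_2\} \cup \{x_3 < \boldsymbol{x} \le x_3 + dx_3\}$$

whereas in the proof of the mean every interval is written to the right of its root, with all $dx_i > 0$, so that each carries probability mass $f(x_i)\,dx_i$ and

$$E\{g(\boldsymbol{x})\} \simeq \sum_i g(x_i)\, f(x_i)\, dx_i .$$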
My thoughts are:
- The mean is found by integration, and $dx$ is taken to be positive when we integrate.
- The function is decreasing at $x_2$, so $dy$ is negative for positive $dx$. But this is a different $dy$ from the one on the original graph, and a different interval, $\{y - dy < \boldsymbol{y} < y\}$.
- Rather than splitting the integration at $x_1$, $x_2$ and $x_3$, it would make more sense to split the function into pieces on which it is monotonically increasing or decreasing, to get around these issues.
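The thoughts above can be checked numerically. Here is a small sketch (my own example, not from the book): take $\boldsymbol{x}$ uniform on $(-1, 1)$ and $y = g(x) = x^2$, so the branch $x < 0$ is decreasing (there the pullback of a positive $dy$ gives $dx < 0$), and compare the mean computed along the x-axis with the mean computed along the y-axis.

```python
import numpy as np

# X ~ Uniform(-1, 1), y = g(x) = x^2.  On the decreasing branch (x < 0)
# the pullback of (y, y+dy) is an interval to the *left* of the root,
# i.e. dx < 0 there, yet both routes below give the same mean, because
# only the magnitude |dx| enters each probability mass f(x)|dx|.

def f_x(x):
    """Density of Uniform(-1, 1)."""
    return np.where(np.abs(x) <= 1, 0.5, 0.0)

g = lambda x: x**2

# Route 1: integrate along the x-axis with a positive mesh dx > 0.
x = np.linspace(-1, 1, 200_001)
dx = x[1] - x[0]
mean_x = np.sum(g(x) * f_x(x)) * dx          # E[g(X)] = integral of g(x) f_x(x) dx

# Route 2: integrate along the y-axis using the transformed density
# f_y(y) = sum over roots x_i of f_x(x_i) / |g'(x_i)|, roots here ±sqrt(y).
y = np.linspace(1e-9, 1, 200_001)
dy = y[1] - y[0]
f_y = 2 * 0.5 / (2 * np.sqrt(y))             # each root contributes 0.5 / |2 x_i|
mean_y = np.sum(y * f_y) * dy                # E[y] = integral of y f_y(y) dy

print(mean_x, mean_y)                        # both close to 1/3
```

Both Riemann sums approach $E\{x^2\} = 1/3$, even though the decreasing branch's interval sits on the opposite side of its root; to first order the mass $f(x_i)|dx_i|$ is the same either way.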
Can anyone clarify my thoughts?
Thanks
Steven