Let X, Y be positive random variables on a sample space Ω. Assume that X(ω) ≥ Y(ω) for all ω ∈ Ω. Prove that EX ≥ EY.


The problem is: Let $X$, $Y$ be positive random variables on a sample space $\Omega$. Assume that $X(\omega)\geq Y(\omega)$ for all $\omega \in \Omega$. Prove that $\operatorname{E} X \geq \operatorname{E}Y$. I am a little confused about how to use $\omega$ here; I have not treated random variables as functions before.

Thank you



BEST ANSWER

You have $\Pr(X-Y\ge0) =1$ and $\operatorname{E}(X-Y)=\operatorname{E}(X)- \operatorname{E}(Y)$.

If $\Pr(X-Y=0)=1$ then $\operatorname{E}(X-Y) = \operatorname{E}(0)=0.$ If $\Pr(X-Y=0)<1$ then $$ 0 < \Pr(X-Y>0) = \Pr(X-Y>1) + \sum_{n=1}^\infty \Pr\left( \frac 1 {n+1} < X-Y \le \frac 1 n \right), $$ so at least one term on the right-hand side is strictly positive. So for some $n\in\{1,2,3,\ldots\}$ you have $$ \Pr\left(X-Y >\frac 1 n \right) = p > 0, $$ and thus $$ \operatorname{E}(X-Y) \ge \frac 1 n \cdot p > 0. \tag 1 $$
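The tail decomposition above can be checked numerically. Here is a minimal Python sketch; the choice $X - Y \sim \mathrm{Uniform}(0,2)$ is purely illustrative and not part of the argument:

```python
import random

random.seed(0)
# Illustrative choice: D = X - Y is nonnegative; here D ~ Uniform(0, 2).
samples = [random.uniform(0, 2) for _ in range(20_000)]

def pr(event):
    """Empirical probability of an event over the samples."""
    return sum(1 for d in samples if event(d)) / len(samples)

# Pr(D > 0) decomposes into Pr(D > 1) plus the masses of the
# intervals (1/(n+1), 1/n]; the infinite sum is truncated at n = 100.
total = pr(lambda d: d > 1)
for n in range(1, 100):
    total += pr(lambda d: 1 / (n + 1) < d <= 1 / n)

# Some tail has positive mass, giving the bound E(D) >= (1/n) * Pr(D > 1/n).
n = 2
p = pr(lambda d: d > 1 / n)
mean = sum(samples) / len(samples)
assert mean >= (1 / n) * p
```

The final assertion is exactly the empirical version of line $(1)$.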

Postscript on the nature of expected value: One may speak of $X$ as a function from $\Omega$ into $\mathbb R$, but I think it's usually better to speak of the probability distribution of $X$ on subsets of $\mathbb R$. How, then, should one define the concept of expected value? For discrete random variables $X$ one writes $$ \operatorname{E}(X) = \sum_x x\Pr(X=x), $$ where $x$ runs through the set of all possible values that $X$ can take with positive probability. For other random variables one writes $$ \operatorname{E}(X) = \operatorname{E}(X\cdot1_{X\ge0}) - \operatorname{E}( -X\cdot 1_{X<0}), $$ and one must then define $\operatorname{E}(X)$ for random variables $X$ that satisfy $\Pr(X\ge0)=1$.
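As a concrete instance of the discrete formula $\operatorname{E}(X) = \sum_x x\Pr(X=x)$, a short Python sketch; the fair die is just an assumed example:

```python
from fractions import Fraction

# Illustrative discrete random variable: a fair six-sided die.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# E(X) = sum over x of x * Pr(X = x)
expectation = sum(x * p for x, p in pmf.items())
print(expectation)  # 7/2
```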

The way to do that is first to say that for any finite set of numbers $0<a_1<\cdots<a_n$, every number smaller than $$ a_1 \Pr(a_2>X\ge a_1) + a_2 \Pr(a_3>X\ge a_2) + \cdots + a_n \Pr(X\ge a_n) $$ is too small to be the expected value, and then to define the expected value as the smallest number that is not too small (and if all numbers are too small, then the expected value is $\infty$).
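The "smallest number that is not too small" is the supremum of these finite lower sums. The following Python sketch illustrates that refining the levels increases the lower sum toward the expected value; the choice $X \sim \mathrm{Exponential}(1)$, with $\Pr(X \ge a) = e^{-a}$ and $\operatorname{E}(X) = 1$, is an assumed example:

```python
import math

# Illustrative: X ~ Exponential(1), so Pr(X >= a) = exp(-a) and E(X) = 1.
def tail(a):
    return math.exp(-a)

def lower_sum(levels):
    # a_1 Pr(a_2 > X >= a_1) + a_2 Pr(a_3 > X >= a_2) + ... + a_n Pr(X >= a_n)
    s = 0.0
    for i, a in enumerate(levels):
        if i + 1 < len(levels):
            s += a * (tail(a) - tail(levels[i + 1]))
        else:
            s += a * tail(a)
    return s

# Nested refinements of the levels 0 < a_1 < ... < a_n = 10 give
# increasing lower sums, approaching E(X) = 1 from below.
results = {n: lower_sum([k * (10 / n) for k in range(1, n + 1)])
           for n in (10, 100, 1000)}
print(results)
```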

This definition justifies line $(1)$ above. One could almost say that line $(1)$ above justifies this definition, although in addition to line $(1)$, one would probably want a characterization of expected value to mention linearity.


Andrew solved it in the comments: you just need the monotonicity of the integral. Using the integral covers every case (the random variables could be continuous or discrete).

Random variables are measurable functions, so let's consider a measure space $(\Omega, \Sigma, \mu)$.

So remember that $ X:\Omega \to \mathbb{R} $ and $ Y:\Omega \to \mathbb{R} $ (I used $ \mathbb{R}$ because you compare them). Then, if $X \geq Y$, $$ \mathbf{E}[X] = \int_{\Omega} X(\omega)\, d\mu(\omega) \geq \int_{\Omega} Y(\omega)\, d\mu(\omega) = \mathbf{E}[Y] $$ by monotonicity of the integral.
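A quick Monte Carlo sanity check of this monotonicity; the specific pointwise-ordered pair of functions of $\omega$ is made up for the demonstration:

```python
import random

random.seed(1)

# Illustrative sample space: omega drawn uniformly from [0, 1].
omegas = [random.random() for _ in range(100_000)]

# Y(omega) = omega^2 and X(omega) = omega^2 + omega, so X >= Y pointwise.
Y = [w ** 2 for w in omegas]
X = [w ** 2 + w for w in omegas]

# Empirical expectations; monotonicity of the (empirical) integral
# forces EX >= EY whenever X >= Y sample by sample.
EX = sum(X) / len(X)
EY = sum(Y) / len(Y)
print(EX, EY)
```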

If you are not looking for a measure-theoretic solution like the one provided above, you can look at it as in a first course on probability: $X$ and $Y$ are variables in the sense that they vary when different experiments are made. That is, for an experiment $\omega$, you will get $ X(\omega)$ and $ Y(\omega)$, but both random variables are 'using' the same experiment, so they should be weighted by the same value. That is (very informally), the probability of $ \omega $ happening would be $ dP(\omega) = f(\omega)\, d\omega $, and you need to account for all the possible outcomes, so $ \int_{\Omega} X(\omega) f(\omega)\, d\omega \geq \int_{\Omega} Y(\omega) f(\omega)\, d\omega $. But the integral on the left is the definition of the expected value of $X$ and the one on the right is the definition of the expected value of $Y$, so $ \mathbf{E}[X] \geq \mathbf{E}[Y] $.
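The informal weighted-outcomes picture can also be sketched numerically. Here is an assumed example with outcomes $\omega \in [0,1]$, density $f(\omega) = 2\omega$, and the pointwise-ordered pair $X(\omega) = \omega \geq Y(\omega) = \omega^2$:

```python
# Illustrative density on [0, 1]: f(omega) = 2 * omega (integrates to 1).
N = 10_000
h = 1.0 / N

def expectation(g):
    # Midpoint rule for E[g] = integral of g(w) * f(w) dw over [0, 1].
    return sum(g(w) * (2 * w) * h for w in ((k + 0.5) * h for k in range(N)))

EX = expectation(lambda w: w)        # exact value: 2/3
EY = expectation(lambda w: w ** 2)   # exact value: 1/2
print(EX, EY)
```

Since $X(\omega) \ge Y(\omega)$ at every outcome and both integrands carry the same weight $f(\omega)$, the first integral dominates the second.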