I am reading this set of probability theory notes from Stanford:
http://statweb.stanford.edu/~adembo/stat-310b/lnotes.pdf
I was wondering if someone could help me understand the proof of Proposition 4.2.4 on page 157.
Question 1:
"Hence in this case: \begin{equation*} E\left[ Y I_{Y\le 0}\right] = 0 \end{equation*} That is, $Y \ge 0$ almost surely."
I cannot see for the life of me why this is the case. I know the following fact: if $A$ is a measurable set with measure zero, i.e. $\mu(A) = 0$, then the integral of a measurable random variable over $A$ equals $0$. But to my knowledge the converse of this statement is not true in general!
Question 2: Also, when he writes "...so $P(X>0, Y = 0) = 0$...", should it instead be $P(X>0, Y \le 0) = 0$?
I am also quite uncertain as to how this last statement implies that $Y> 0$ a.s.
My gut tells me that if the above "corrected" statement were true, then because $P(X>0) = 1$, it must imply that $P(Y\le 0) = 0$...
Thanks in advance for the help!
Question 1: The point is that if $P(Y < 0) > 0$, then $$0 \leq E(XI_{Y<0}) = E(YI_{Y < 0}) < 0,$$ a contradiction. The first inequality holds because $X \ge 0$ a.s.; the middle equality holds because $\{Y < 0\}$ is measurable with respect to the $\sigma$-algebra on which $Y = E[X \mid \mathcal{G}]$ is defined; and the strict inequality holds because $Y < 0$ on a set of positive probability. Hence, we must have $P(Y \geq 0)=1$ whenever $P(X \geq 0)=1$.
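The strict inequality is exactly the "converse" you were worried about in Question 1, and it follows from continuity from below. A sketch (writing $A_n = \{Y \le -1/n\}$, so that $\{Y<0\}$ is the increasing union of the $A_n$):

```latex
% If P(Y < 0) > 0, then some A_n = {Y <= -1/n} has positive probability,
% and Y is bounded above by -1/n on that set:
\begin{align*}
\{Y < 0\} &= \bigcup_{n \ge 1} \{Y \le -\tfrac{1}{n}\}
  \quad\Longrightarrow\quad
  P\!\left(Y \le -\tfrac{1}{n}\right) > 0 \text{ for some } n, \\
E\!\left[Y I_{Y<0}\right]
  &\le E\!\left[Y I_{Y \le -1/n}\right]
  \le -\tfrac{1}{n}\, P\!\left(Y \le -\tfrac{1}{n}\right) < 0.
\end{align*}
```

(The first inequality uses that $Y I_{-1/n < Y < 0} \le 0$, so dropping that piece only increases the expectation.)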
Question 2: Note that $X > 0$ a.s. implies $Y \geq 0$ a.s. by part 1. So to show that $Y > 0$ a.s., it remains to show that $P(Y=0)=0$. Since $P(X > 0)=1$, the identity you've written gives $0 = P(X > 0, Y=0) = P(Y=0)$.
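Spelled out, the last two steps combine $P(X \le 0) = 0$ with part 1's conclusion $P(Y < 0) = 0$:

```latex
\begin{align*}
P(Y = 0) &= P(X > 0,\, Y = 0) + P(X \le 0,\, Y = 0) = 0 + 0 = 0, \\
P(Y > 0) &= 1 - P(Y < 0) - P(Y = 0) = 1 - 0 - 0 = 1.
\end{align*}
```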