Let $(\Omega,\mathscr{B},P)$ be a probability space and let $Y$ be a positive random variable with $E(Y)=1$. Define $P^*(A) = \int_A Y\,dP.$ Show that
a. $E^*(X)=E(XY)$ for any random variable $X\geq 0$.
b. For any $X\geq 0$ and any random variables $Z_1,\cdots,Z_n$, $$ E^*(X\mid Z_1,\cdots,Z_n) = \frac{E(XY\mid Z_1,\cdots,Z_n)}{E(Y\mid Z_1,\cdots,Z_n)}. $$
My attempt:
a. $E^*(X)=\int_\Omega X\,dP^* = \int_\Omega X \int_{dw}Y\,dP = \int_\Omega XY\,P(dw)$.
$E(XY)=\int_\Omega XY\,P(dw)$.
This proves part a. My question: why do we require $X$ to be non-negative? Also, please point out any flaws in my approach.
b. I tried to solve this with a chain-rule-style identity for conditional expectation (from a related question), but alas, this approach seems to be wrong.
So, any hints to solve this problem will be appreciated! Thanks in advance.
a. Your attempt seems flawed. For instance, what does $dw$ even mean? Try the following instead. Consider the set $\mathscr{H}_+$ of random variables $X \geq 0$ for which $\mathbf{E}^*(X) = \mathbf{E}(XY).$ Then $\mathscr{H}_+$ contains all indicator functions of events (by the definition of $\mathbf{P}^*$), is closed under addition and under multiplication by positive scalars, and is closed under monotone limits. Hence, $\mathscr{H}_+$ contains all non-negative random variables (recall that every non-negative random variable equals, a.s., a monotone limit of simple non-negative random variables). As to why $X$ has to be non-negative: it is just so that the expectations exist (possibly equal to $+\infty$). You could instead ask $X$ to be bounded, or integrable.
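As a sanity check (not part of the proof), part a. can be verified exactly on a small finite probability space. The weights `p` and the values of `Y` and `X` below are arbitrary illustrative choices of mine, not data from the problem:

```python
# Sketch: verify E*(X) = E(XY) on Omega = {0,...,5} with exact arithmetic.
# p, y, x are illustrative choices; the only constraints are Y > 0, E(Y) = 1.
from fractions import Fraction as F

p = [F(1, 6)] * 6                                            # uniform P
y = [F(1, 2), F(1, 2), F(3, 2), F(3, 2), F(1), F(1)]         # Y > 0
assert sum(pi * yi for pi, yi in zip(p, y)) == 1             # E(Y) = 1

# P*({w}) = Y(w) P({w}), so P*(A) = sum over A of these weights.
p_star = [pi * yi for pi, yi in zip(p, y)]
assert sum(p_star) == 1                                      # P* is a probability

x = [F(k) for k in range(6)]                                 # some X >= 0
E_star_X = sum(ps * xi for ps, xi in zip(p_star, x))         # E*(X)
E_XY = sum(pi * xi * yi for pi, xi, yi in zip(p, x, y))      # E(XY)
assert E_star_X == E_XY
```

On a finite space the identity is immediate from the pointwise definition of $P^*$; the exercise is to push it through indicators, simple functions, and monotone limits in general.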
b. Consider now the sigma algebra $\mathscr{X}$ generated by $Z_1, \ldots, Z_n.$ By definition, $\mathbf{E}^*(X \mid Z_1, \ldots, Z_n)$ is any $\mathscr{X}$-measurable random variable for which the following relation holds: if $\mathrm{E} \in \mathscr{X}$ then $$\int\limits_{\mathrm{E}} \mathbf{E}^*(X \mid Z_1, \ldots, Z_n)\, d\mathbf{P}^* = \int\limits_{\mathrm{E}} X\, d\mathbf{P}^*.$$

By virtue of a., the right-hand side above equals $\int\limits_\mathrm{E} XY\, d\mathbf{P}$ which, in turn, equals $$\int\limits_\mathrm{E} \mathbf{E}(XY \mid Z_1, \ldots, Z_n)\, d\mathbf{P} = \int\limits_\mathrm{E} \dfrac{\mathbf{E}(XY \mid Z_1, \ldots, Z_n)}{\mathbf{E}(Y \mid Z_1, \ldots, Z_n)}\, \mathbf{E}(Y \mid Z_1, \ldots, Z_n)\, d\mathbf{P}$$ (the division is legitimate because $Y > 0$ forces $\mathbf{E}(Y \mid Z_1, \ldots, Z_n) > 0$ a.s.).

Now apply the result of a. on the probability space $(\Omega, \mathscr{X}, \mathbf{P})$ with $\mathbf{E}(Y \mid Z_1, \ldots, Z_n)$ in lieu of $Y$ (recall $\mathbf{E}(\mathbf{E}(Y \mid Z_1, \ldots, Z_n)) = \mathbf{E}(Y)=1$ and $\mathbf{E}(Y \mid Z_1, \ldots, Z_n) \geq 0$) to get $$\int\limits_{\mathrm{E}} \mathbf{E}^*(X \mid Z_1, \ldots, Z_n)\, d\mathbf{P}^* = \int\limits_\mathrm{E} \dfrac{\mathbf{E}(XY \mid Z_1, \ldots, Z_n)}{\mathbf{E}(Y \mid Z_1, \ldots, Z_n)}\, d\mathbf{P}^{**},$$ where $\mathbf{P}^{**}$ is the probability measure on $\mathscr{X}$ given by $\mathbf{P}^{**}(\mathrm{A}) = \int\limits_\mathrm{A} \mathbf{E}(Y \mid Z_1, \ldots, Z_n)\, d\mathbf{P}$. By the definition of conditional expectation, $\mathbf{P}^{**}(\mathrm{A})=\mathbf{P}^*(\mathrm{A})$ for all $\mathrm{A} \in \mathscr{X},$ that is, $\mathbf{P}^{**} = \mathbf{P}^*$ on $\mathscr{X}$. Thus the $\mathscr{X}$-measurable quotient $\mathbf{E}(XY \mid Z_1, \ldots, Z_n)/\mathbf{E}(Y \mid Z_1, \ldots, Z_n)$ satisfies the defining relation of $\mathbf{E}^*(X \mid Z_1, \ldots, Z_n)$. This substantiates b. Q.E.D.
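The identity in b. can likewise be checked exactly on a finite space. In the sketch below (my illustrative data again), $Z$ generates the two-block partition $\{\{0,1,2\},\{3,4,5\}\}$ of $\Omega$, so conditional expectation reduces to a weighted average over each block:

```python
# Sketch: check E*(X | Z) = E(XY | Z) / E(Y | Z) on Omega = {0,...,5}.
# p, y, x and the partition generated by Z are all illustrative choices.
from fractions import Fraction as F

p = [F(1, 6)] * 6                                            # uniform P
y = [F(1, 2), F(1, 2), F(3, 2), F(3, 2), F(1), F(1)]         # Y > 0, E(Y) = 1
x = [F(k) for k in range(6)]                                 # some X >= 0
p_star = [pi * yi for pi, yi in zip(p, y)]                   # dP* = Y dP
blocks = [{0, 1, 2}, {3, 4, 5}]                              # partition from Z

def cond_exp(w, weights, blocks):
    """E(W | Z) under the given measure: weighted average of W on each block."""
    out = [None] * len(w)
    for b in blocks:
        mass = sum(weights[i] for i in b)
        val = sum(weights[i] * w[i] for i in b) / mass
        for i in b:
            out[i] = val
    return out

lhs = cond_exp(x, p_star, blocks)                            # E*(X | Z), under P*
xy = [xi * yi for xi, yi in zip(x, y)]
rhs_num = cond_exp(xy, p, blocks)                            # E(XY | Z), under P
rhs_den = cond_exp(y, p, blocks)                             # E(Y | Z), under P
rhs = [n / d for n, d in zip(rhs_num, rhs_den)]
assert lhs == rhs
```

The design choice here mirrors the proof: the left-hand side averages with respect to the tilted weights $Y\,dP$, while the right-hand side uses only $P$-conditional expectations, and the two agree blockwise.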