Version of iterated expectations conditioned on subsets: Simple proof?


Thanks for any help with this. It is from the Stokey and Lucas (1988) Recursive Methods text (pg. 208) and uses notation from a Dynamic Modeling course taught at Carnegie Mellon and at Florida International University.

How do I sketch a detailed proof with the joint and conditional PDFs to show the following "general" version of the law of iterated expectations?

Let $I_t$ denote the information available at time $t$, where $I_t \in I$, and let $\omega_t \subset I_t$ be a subset of that information set. The expectation of a random variable $x$ conditional on the realization of $I_t$ is denoted $E[x|I_t]$. The law of iterated expectations is

$$E[E[x|I_t]|\omega_t] = E[x | \omega_t]$$

The furthest I can get is:

$$\sum^{I_t} E[x|I_t] f(I_t|\omega_t) = \sum^{I_t}\sum^x x f(x|I_t)f(I_t|\omega_t) = \sum^{I_t}\sum^x x \frac{f(x,I_t)}{f(I_t)}\frac{f(I_t,\omega_t)}{f(\omega_t)}= ?$$

It's difficult to see how to obtain $f(x,\omega_t)$ (which you would need for $f(x|\omega_t)$) from the last term. It seems to come down to whether it's true that $f(I_t,\omega_t) = f(I_t)$ when $\omega_t \subset I_t$. Thanks for any help.
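The identity can at least be sanity-checked numerically before proving it. Below is a minimal sketch with a small made-up joint pmf $f(x, I_t)$, where the coarser information $\omega_t$ is taken to be a deterministic coarsening of $I_t$ (the pmf weights and the coarsening map `coarse` are our own illustrative assumptions, not from the text):

```python
# Hypothetical toy model: I_t takes values 0..3; the coarser information
# omega_t = I_t // 2 only reveals which half I_t lies in.
# Joint pmf f(x, I_t) on a small grid (made-up weights, normalized below).
weights = {(x, i): (x + 1) * (i + 2) % 7 + 1 for x in range(3) for i in range(4)}
Z = sum(weights.values())
f = {k: w / Z for k, w in weights.items()}

def coarse(i):
    # omega_t as a deterministic coarsening of I_t
    return i // 2

f_I = {i: sum(f[x, i] for x in range(3)) for i in range(4)}            # f(I_t)
f_w = {w: sum(f_I[i] for i in range(4) if coarse(i) == w) for w in (0, 1)}

# Inner expectation E[x | I_t = i]
E_x_I = {i: sum(x * f[x, i] for x in range(3)) / f_I[i] for i in range(4)}

for w in (0, 1):
    # Left side: E[ E[x|I_t] | omega_t = w ] = sum_i E[x|i] f(i|w)
    lhs = sum(E_x_I[i] * f_I[i] / f_w[w] for i in range(4) if coarse(i) == w)
    # Right side: E[x | omega_t = w] computed directly from the joint pmf
    rhs = sum(x * f[x, i] / f_w[w] for x in range(3) for i in range(4)
              if coarse(i) == w)
    assert abs(lhs - rhs) < 1e-12
```

The check passes for any choice of positive weights, which is consistent with the identity holding whenever $\omega_t$ is a coarsening of $I_t$.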

There are 2 solutions below.

Answer 1

Here is a possible answer, though the full answer still depends on whether $f(I_t,\omega_t) = f(I_t)$ when $\omega_t \subset I_t$ ($\omega_t$ is a subset of $I_t$). Granting that, here is a proof in the other direction, writing $E[x \mid I_t, \omega_t]$ in place of $E[x \mid I_t]$; the two coincide exactly when $f(I_t,\omega_t) = f(I_t)$.

$$\begin{aligned}
E[x \mid \omega_t] &= \sum^x x\, f(x\mid\omega_t) = \sum^x x\, \frac{f(x,\omega_t)}{f(\omega_t)} = \sum^x x\, \frac{\sum^{I_t} f(x,I_t,\omega_t)}{f(\omega_t)} \\
&= \sum^x x\, \frac{\sum^{I_t} f(x\mid I_t,\omega_t)\, f(I_t,\omega_t)}{f(\omega_t)} = \sum^{I_t} \frac{f(I_t,\omega_t)}{f(\omega_t)} \sum^x x\, f(x\mid I_t,\omega_t) \\
&= \sum^{I_t} f(I_t\mid\omega_t)\, E[x\mid I_t,\omega_t] = E\big[E[x\mid I_t,\omega_t]\mid\omega_t\big]
\end{aligned}$$
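The step this answer leans on, $f(I_t,\omega_t) = f(I_t)$ when $\omega_t$ is implied by $I_t$, can also be checked directly: if the event $\{I_t = i\}$ is contained in the event $\{\omega_t = w\}$, the joint probability equals the marginal. A minimal empirical sketch (the draw distribution and the coarsening $\omega_t = I_t\,//\,2$ are our own assumptions):

```python
import random

random.seed(0)
# I_t is a fine signal; omega_t = I_t // 2 is a coarsening, so the event
# {I_t = i} implies the event {omega_t = i // 2}.
draws = [random.randrange(4) for _ in range(100_000)]

for i in range(4):
    p_I = sum(1 for d in draws if d == i) / len(draws)
    # Joint frequency of {I_t = i} and {omega_t = i // 2} together
    p_joint = sum(1 for d in draws if d == i and d // 2 == i // 2) / len(draws)
    # The coarser event adds no further restriction, so the two agree exactly
    assert p_joint == p_I
```

The equality is exact (not just approximate) because the coarser event is automatically true whenever the finer one is.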

Answer 2

It seems that the question asked is the following:

Let $X$ denote an integrable random variable defined on a probability space $(\Omega,\mathcal F,P)$, $\mathcal G$ a sub-sigma-algebra of $\mathcal F$, and $A\in\mathcal G$ an event of positive probability. Then $$E(E(X\mid \mathcal G)\mid A)=E(X\mid A).$$

To prove this, let $Y=E(X\mid \mathcal G)$. Then $$E(Y\mid A)=\frac{E(Y\mathbf 1_A)}{P(A)},\qquad E(X\mid A)=\frac{E(X\mathbf 1_A)}{P(A)},$$ hence it suffices to show that $$E(Y\mathbf 1_A)=E(X\mathbf 1_A).$$ But the fact that this last identity holds (for every $A$ in $\mathcal G$) is, together with the fact that $Y$ is $\mathcal G$-measurable, exactly the definition of the conditional expectation $Y=E(X\mid \mathcal G)$; see any standard reference on conditional expectation.
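On a finite probability space this defining property can be verified by direct computation. Here is a minimal sketch with a uniform measure on six points, $\mathcal G$ generated by a three-cell partition, and $A$ a union of $\mathcal G$-cells (all concrete numbers are our own illustrative choices):

```python
from fractions import Fraction

# Finite sketch: Omega = {0,...,5}, uniform P, G generated by the partition
# {{0,1},{2,3},{4,5}}, and A = {0,1,2,3}, a union of G-cells (so A is in G).
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}
X = {0: 3, 1: 5, 2: 2, 3: 8, 4: 1, 5: 7}     # an arbitrary random variable
cells = [{0, 1}, {2, 3}, {4, 5}]             # the partition generating G
A = {0, 1, 2, 3}

# Y = E(X | G): constant on each cell, equal to the cell average of X
Y = {}
for cell in cells:
    pc = sum(P[w] for w in cell)
    avg = sum(X[w] * P[w] for w in cell) / pc
    for w in cell:
        Y[w] = avg

E_Y1A = sum(Y[w] * P[w] for w in A)          # E(Y 1_A)
E_X1A = sum(X[w] * P[w] for w in A)          # E(X 1_A)
assert E_Y1A == E_X1A                        # the defining property, for A in G

PA = sum(P[w] for w in A)
assert E_Y1A / PA == E_X1A / PA              # hence E(E(X|G)|A) = E(X|A)
```

Using `Fraction` keeps the arithmetic exact, so the identity holds to equality rather than floating-point tolerance.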