Thanks for any help with this. It is from the Stokey and Lucas (1988) Recursive Methods text (pg. 208) and uses notation from a Dynamic Modeling course taught at Carnegie Mellon and at Florida International University.
How do I sketch a detailed proof, using joint and conditional PDFs, of the following "general" version of the law of iterated expectations?
Let $I_t$ denote the information available at time $t$, and let $\omega_t \subseteq I_t$ be a subset of that information (a coarser information set). The expectation of a random variable $x$ conditional on the realization of $I_t$ is denoted $E[x \mid I_t]$. The law of iterated expectations is
$$E[E[x|I_t]|\omega_t] = E[x | \omega_t]$$
The furthest I can get is:
$$\sum_{I_t} E[x \mid I_t]\, f(I_t \mid \omega_t) = \sum_{I_t}\sum_{x} x\, f(x \mid I_t)\, f(I_t \mid \omega_t) = \sum_{I_t}\sum_{x} x\, \frac{f(x,I_t)}{f(I_t)}\frac{f(I_t,\omega_t)}{f(\omega_t)} = \;?$$
It's difficult to see how you end up with $f(x,\omega_t)$ (which you'd need to obtain $f(x \mid \omega_t)$) from the last term. It seems to come down to whether it's true that $f(I_t,\omega_t) = f(I_t)$ when $\omega_t \subseteq I_t$. Thanks for any help.
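One way to convince yourself of that key fact is a quick numerical check. The sketch below is my own illustration (not from Stokey and Lucas): it models $I_t$ as a discrete random variable and $\omega_t = g(I_t)$ as a deterministic coarsening of it. Under that assumption, the joint mass $f(I_t, \omega_t)$ equals $f(I_t)$ on every consistent pair $(i, g(i))$, and is zero elsewhere.

```python
from fractions import Fraction

# A discrete information variable I with an arbitrary distribution.
f_I = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}

# omega is a coarsening of I: it only records whether I < 3.
def g(i):
    return "low" if i < 3 else "high"

# Joint distribution of (I, omega): since omega is determined by I,
# all probability mass sits on the consistent pairs (i, g(i)).
f_I_omega = {(i, g(i)): p for i, p in f_I.items()}

# Check: f(I_t, omega_t) = f(I_t) on every consistent pair.
for i, p in f_I.items():
    assert f_I_omega[(i, g(i))] == p
print("f(I, omega) = f(I) whenever omega = g(I)")
```

Intuitively, conditioning on the pair $(I_t, \omega_t)$ is no more informative than conditioning on $I_t$ alone, because $I_t$ already pins down $\omega_t$.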
Here is a possible answer, but it still hinges on whether $f(I_t,\omega_t) = f(I_t)$ when $\omega_t \subseteq I_t$ ($\omega_t$ is a subset of $I_t$). This does hold: since $\omega_t$ is determined by $I_t$, knowing $I_t$ already pins down $\omega_t$, so the joint density of any consistent pair collapses to $f(I_t)$. Granting that, here is a proof in the other direction, writing $E[x \mid I_t, \omega_t]$ in place of $E[x \mid I_t]$; the two coincide for the same reason.
$$E[x \mid \omega_t] = \sum_{x} x\, f(x \mid \omega_t) = \sum_{x} x\, \frac{f(x,\omega_t)}{f(\omega_t)} = \sum_{x} x\, \frac{\sum_{I_t} f(x, I_t, \omega_t)}{f(\omega_t)} = \sum_{x} x\, \frac{\sum_{I_t} f(x \mid I_t, \omega_t)\, f(I_t, \omega_t)}{f(\omega_t)} = \sum_{I_t} \frac{f(I_t, \omega_t)}{f(\omega_t)} \sum_{x} x\, f(x \mid I_t, \omega_t) = \sum_{I_t} f(I_t \mid \omega_t)\, E[x \mid I_t, \omega_t] = E\big[\,E[x \mid I_t, \omega_t] \mid \omega_t\,\big]$$
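As a sanity check on the identity itself, here is a small numerical verification (my own sketch, with a made-up joint distribution, not from the text): $x$ and $I_t$ are discrete, $\omega_t = g(I_t)$ is a coarsening of $I_t$, and both sides of $E[E[x \mid I_t] \mid \omega_t] = E[x \mid \omega_t]$ are computed directly from the joint PDF using exact rational arithmetic.

```python
from fractions import Fraction
from collections import defaultdict

# Made-up joint distribution f(x, I) over a small discrete space.
f_xI = {
    (0, 1): Fraction(1, 8), (10, 1): Fraction(1, 8),
    (0, 2): Fraction(1, 8), (10, 2): Fraction(1, 4),
    (0, 3): Fraction(1, 8), (10, 3): Fraction(1, 4),
}

def g(i):                       # omega = g(I): a coarsening of I
    return "low" if i < 3 else "high"

# Marginal f(I) from the joint.
f_I = defaultdict(Fraction)
for (x, i), p in f_xI.items():
    f_I[i] += p

def E_x_given_I(i):             # E[x | I_t = i]
    return sum(x * p for (x, j), p in f_xI.items() if j == i) / f_I[i]

# Marginal f(omega), summing f(I) over the cells of the coarsening.
f_omega = defaultdict(Fraction)
for i, p in f_I.items():
    f_omega[g(i)] += p

def E_x_given_omega(w):         # E[x | omega_t = w], directly from f(x, omega)
    return sum(x * p for (x, i), p in f_xI.items() if g(i) == w) / f_omega[w]

def E_E_x_given_I_given_omega(w):   # E[ E[x|I_t] | omega_t = w ]
    return sum(E_x_given_I(i) * f_I[i] / f_omega[w]
               for i in f_I if g(i) == w)

# Both sides agree exactly on every realization of omega.
for w in ("low", "high"):
    assert E_E_x_given_I_given_omega(w) == E_x_given_omega(w)
print("law of iterated expectations verified")
```

Note that the inner conditioning uses $f(I_t \mid \omega_t) = f(I_t)/f(\omega_t)$ on the consistent pairs, which is exactly the $f(I_t,\omega_t) = f(I_t)$ step the derivation above relies on.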