Bayesian iterated expectation


Firstly, I'm familiar with the law of iterated expectations, i.e. $\mathbb{E}(\theta) = \mathbb{E}(\mathbb{E}(\theta\mid Y))$. In the Bayesian context we can think of $\mathbb{E}(\theta\mid Y)$ as the posterior expectation of the unknown parameter $\theta$.

The book that I'm reading considers the normal-normal model, where $\theta$ is normally distributed with fixed hyperparameters $\mu_0,\tau_0^2$ and the data satisfy $y\mid\theta \sim N(\theta,\sigma^2)$ with $\sigma^2$ known.
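(For reference, the standard conjugate update in this model, for a single observation $y$, is
$$\theta\mid y \sim N(\mu_1,\tau_1^2),\qquad \mu_1=\frac{\mu_0/\tau_0^2+y/\sigma^2}{1/\tau_0^2+1/\sigma^2},\qquad \frac{1}{\tau_1^2}=\frac{1}{\tau_0^2}+\frac{1}{\sigma^2},$$
where $\mu_1,\tau_1^2$ is my own labeling; the book's notation may differ.)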

I'm stuck on the following part of the book, which looks at the posterior predictive distribution $p(\tilde{y}\mid y)$:

[Image: the book's derivation of the posterior predictive mean $\mathbb{E}(\tilde{y}\mid y)$ and variance $\operatorname{var}(\tilde{y}\mid y)$, applying equations 2.8 and 2.9.]

Equations 2.8 and 2.9 refer to the basic iterated expectation and variance rules. I'm confused about how they obtain the first equality for $\mathbb{E}(\tilde{y}\mid y)$, i.e. why is $\mathbb{E}(\tilde{y}\mid y) = \mathbb{E}(\mathbb{E}(\tilde{y}\mid\theta,y)\mid y)$? Can someone provide some justification or proof of this fact?

1 Answer


> Firstly, I'm familiar with the law of iterated expectations: $\mathsf E(\theta)=\mathsf E(\mathsf E(\theta\mid Y))$

> Why is $\mathsf E(\tilde y\mid y)=\mathsf E(\mathsf E(\tilde y\mid \theta,y)\mid y)$? Can someone provide some justification or proof of this fact?

It is called the Law of Iterated Expectation because it can be iterated: $$\begin{align}\mathsf E(X)&=\mathsf E(\mathsf E(X\mid Y))\\[1ex]&=\mathsf E(\mathsf E(\mathsf E(X\mid Y,Z)\mid Y))\end{align}$$ The second line uses the tower property conditionally: $\mathsf E(X\mid Y)=\mathsf E(\mathsf E(X\mid Y,Z)\mid Y)$. Taking $X=\tilde y$, $Y=y$, and $Z=\theta$ gives exactly the equality you asked about, $\mathsf E(\tilde y\mid y)=\mathsf E(\mathsf E(\tilde y\mid \theta,y)\mid y)$.
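To finish the computation in the normal-normal model: conditional on $\theta$, the new observation $\tilde y\sim N(\theta,\sigma^2)$ does not depend on $y$, so $\mathsf E(\tilde y\mid \theta,y)=\theta$ and therefore $$\mathsf E(\tilde y\mid y)=\mathsf E(\mathsf E(\tilde y\mid \theta,y)\mid y)=\mathsf E(\theta\mid y),$$ the posterior mean.

Below is a minimal Monte Carlo sketch checking this identity numerically; the values of `mu0`, `tau0`, `sigma`, and `y` are arbitrary choices for illustration, not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperparameters and a single observed data point
mu0, tau0, sigma = 1.0, 2.0, 1.5
y = 0.4

# Conjugate posterior: theta | y ~ N(mu1, tau1^2)
prec = 1 / tau0**2 + 1 / sigma**2
tau1 = prec**-0.5
mu1 = (mu0 / tau0**2 + y / sigma**2) / prec

n = 1_000_000
theta = rng.normal(mu1, tau1, n)     # posterior draws of theta given y
y_tilde = rng.normal(theta, sigma)   # predictive draws: y~ | theta ~ N(theta, sigma^2)

print(y_tilde.mean())  # direct Monte Carlo estimate of E(y~ | y)
print(theta.mean())    # E(E(y~ | theta, y) | y) = E(theta | y), since E(y~ | theta, y) = theta
print(mu1)             # analytic posterior mean
```

The three printed numbers agree up to simulation noise, which is exactly the content of $\mathsf E(\tilde y\mid y)=\mathsf E(\mathsf E(\tilde y\mid\theta,y)\mid y)$.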