Firstly, I'm familiar with the law of iterated expectations, i.e. $\mathbb{E}(\theta) = \mathbb{E}(\mathbb{E}(\theta\mid Y))$. In the Bayesian context we can think of $\mathbb{E}(\theta\mid Y)$ as the posterior expectation of the unknown parameter $\theta$.
The book I'm reading considers the normal-normal model, where $\theta$ is normally distributed with fixed hyperparameters $\mu_0, \tau_0^2$, and the data satisfy $y\mid\theta \sim N(\theta,\sigma^2)$ with $\sigma^2$ known.
I'm stuck on the part of the book that looks at the posterior predictive distribution $p(\tilde{y}\mid y)$.
Equations 2.8 and 2.9 refer to the basic iterated expectation and variance rules. I'm confused about how they obtained the first equality for $\mathbb{E}(\tilde{y}\mid y)$, i.e. why is $\mathbb{E}(\tilde{y}\mid y) = \mathbb{E}(\mathbb{E}(\tilde{y}\mid\theta,y)\mid y)$? Can someone provide a justification or proof of this fact?

It is called the Law of Iterated Expectation because it can be iterated: $$\begin{align}\mathsf E(X)&=\mathsf E(\mathsf E(X\mid Y))\\[1ex]&=\mathsf E(\mathsf E(\mathsf E(X\mid Y,Z)\mid Y))\end{align}$$ The second line applies the same rule to the inner term, but with everything conditioned on $Y$: $$\mathsf E(X\mid Y)=\mathsf E(\mathsf E(X\mid Y,Z)\mid Y)$$ Taking $X=\tilde y$, $Y=y$, $Z=\theta$ gives exactly the book's first equality, $\mathsf E(\tilde y\mid y)=\mathsf E(\mathsf E(\tilde y\mid \theta,y)\mid y)$, where the outer expectation averages over the posterior $p(\theta\mid y)$.

To prove the conditional version directly, write the nested expectations as integrals and use $p(\tilde y\mid\theta,y)\,p(\theta\mid y)=p(\tilde y,\theta\mid y)$: $$\begin{align}\mathsf E(\mathsf E(\tilde y\mid\theta,y)\mid y)&=\int\!\left[\int \tilde y\,p(\tilde y\mid\theta,y)\,\mathrm d\tilde y\right]p(\theta\mid y)\,\mathrm d\theta\\[1ex]&=\int \tilde y\left[\int p(\tilde y,\theta\mid y)\,\mathrm d\theta\right]\mathrm d\tilde y\\[1ex]&=\int \tilde y\,p(\tilde y\mid y)\,\mathrm d\tilde y=\mathsf E(\tilde y\mid y)\end{align}$$
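You can also check the identity by simulation in the normal-normal model. The sketch below uses arbitrary illustrative values for $\mu_0$, $\tau_0$, $\sigma$ and the observed $y$ (none are from the book). Since $\mathsf E(\tilde y\mid\theta,y)=\theta$, the Monte Carlo mean of $\tilde y$ drawn via the posterior should match the posterior mean $\mathsf E(\theta\mid y)$:

```python
import numpy as np

# Monte Carlo check of E(y_tilde | y) = E( E(y_tilde | theta, y) | y )
# in the normal-normal model. All numeric values are illustrative choices.
rng = np.random.default_rng(0)

mu0, tau0 = 0.0, 2.0   # prior: theta ~ N(mu0, tau0^2)
sigma = 1.0            # likelihood: y | theta ~ N(theta, sigma^2)
y = 1.5                # a single observed data point

# Conjugate posterior: theta | y ~ N(mu_n, tau_n^2)
tau_n2 = 1 / (1 / tau0**2 + 1 / sigma**2)
mu_n = (mu0 / tau0**2 + y / sigma**2) * tau_n2

n = 1_000_000
theta = rng.normal(mu_n, np.sqrt(tau_n2), n)   # draws from p(theta | y)
y_tilde = rng.normal(theta, sigma)             # draws from p(y_tilde | theta, y)

# E(y_tilde | theta, y) = theta, so the iterated expectation is the
# posterior mean of theta; all three numbers below should agree.
print(y_tilde.mean())   # direct estimate of E(y_tilde | y)
print(theta.mean())     # estimate of E( E(y_tilde | theta, y) | y )
print(mu_n)             # analytic posterior mean
```

The two simulated means and the analytic posterior mean agree up to Monte Carlo error, which is the identity the book is using.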