Using Bayes' theorem, the joint posterior distribution is given by:
\begin{equation} p(\boldsymbol{\theta}_{1:M}, \boldsymbol{\mu}, \Sigma_{\mu} \mid \boldsymbol{y}_{1:M}) = \frac{p(\boldsymbol{\mu}, \Sigma_{\mu}) \prod_{i=1}^{M} p(\boldsymbol{y}_i \mid \boldsymbol{\theta}_i)\, p(\boldsymbol{\theta}_i \mid \boldsymbol{\mu}, \Sigma_{\mu})}{p(\boldsymbol{y}_{1:M})} \quad [1] \end{equation}
If you integrate the above equation over $\boldsymbol{\mu}$ and $\Sigma_{\mu}$, do you get the following?
\begin{equation} \int p(\boldsymbol{\mu}, \Sigma_{\mu}) \prod_{i=1}^{M} p(\boldsymbol{\theta}_i \mid \boldsymbol{\mu}, \Sigma_{\mu}) \, d\boldsymbol{\mu} \, d\Sigma_{\mu} = p(\boldsymbol{\theta}_{1:M}) \quad [2] \end{equation}
If so, I don't understand how Eq. [1] leads to Eq. [2]. Could you please explain?