In the Bayesian setting, suppose that we know the PDF of $X$ given $\theta$ is $p_X(x \vert \theta)$ and the PDF of $Y$ given $\theta$ is $p_Y(y \vert \theta)$. In the standard fashion, we may assume some (continuous) prior distribution of $\theta$, say $g(\theta)$, which we can then use to find the posterior distribution of $\theta$ given $X = x$, denoted $\pi(\theta \vert x)$.
How does one find the PDF of $Y$ given $X = x$? In one of my old textbooks I found the expression $$ q(y \vert x) = \int p_Y(y \vert \theta) \pi(\theta \vert x) \,d\theta $$ stated without any derivation, and I am getting quite lost trying to prove it myself. Any guidance is appreciated.
My attempt: Let $f_{XY}(x,y)$ be the joint PDF of $X$ and $Y$ and $f_X(x)$ be the marginal PDF of $X$. We may then write $$ q(y \vert x) = \frac{f_{XY}(x,y)}{f_X(x)} = \frac{f_{XY}(x,y)}{ \int g(\theta_0) p_X(x \vert \theta_0) \,d\theta_0 } $$ This is where I think I am making a mistake in my derivation. If we write $$ f_{XY}(x,y) = \int g(\theta) p_X(x \vert \theta) p_Y(y \vert \theta) \,d\theta $$ we then have $$ q(y \vert x) = \frac{ \int g(\theta) p_X(x \vert \theta) p_Y(y \vert \theta) \,d\theta }{ \int g(\theta_0) p_X(x \vert \theta_0) \,d\theta_0 } = \int \left( \frac{ g(\theta) p_X(x \vert \theta) }{ \int g(\theta_0) p_X(x \vert \theta_0) \,d\theta_0 } \right) p_Y(y \vert \theta) \,d\theta = \int p_Y(y \vert \theta) \pi(\theta \vert x) \,d\theta $$ I get the desired result, but I am struggling to convince myself that I may express $f_{XY}(x,y)$ in such a fashion.
You are right to be cautious: that step does need justification.
$~\displaystyle f_{X,Y}(x,y) = \int_\Bbb R g(\theta)\, f_{X\mid\Theta}(x\mid\theta)\,f_{Y\mid\Theta}(y\mid\theta)\,\mathrm d \theta~$ only when $X$ and $Y$ are conditionally independent given $\Theta$. The law of total probability always gives $~\displaystyle f_{X,Y}(x,y) = \int_\Bbb R g(\theta)\, f_{X,Y\mid\Theta}(x,y\mid\theta)\,\mathrm d \theta~$; the extra assumption you are using is that the conditional joint density factors as $f_{X,Y\mid\Theta}(x,y\mid\theta) = f_{X\mid\Theta}(x\mid\theta)\,f_{Y\mid\Theta}(y\mid\theta)$, which is exactly conditional independence.
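As a numerical sanity check, here is a small sketch in a hypothetical conjugate model of my own choosing (not from the question): $\theta \sim N(0,1)$ and $X, Y \mid \theta \sim N(\theta, 1)$ conditionally independent. In this model the posterior is $\theta \mid x \sim N(x/2, 1/2)$ and the predictive is $Y \mid x \sim N(x/2, 3/2)$, so we can compare $\int p_Y(y\mid\theta)\,\pi(\theta\mid x)\,d\theta$, computed by grid integration, against that closed form:

```python
import math

def normal_pdf(z, mu, var):
    return math.exp(-(z - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Illustrative conjugate model (my choice, not from the question):
# theta ~ N(0, 1), X | theta ~ N(theta, 1), Y | theta ~ N(theta, 1),
# with X and Y conditionally independent given theta.
N = 20000
grid = [-10 + 20 * i / N for i in range(N + 1)]  # theta grid on [-10, 10]
dtheta = 20 / N
x = 1.3  # observed value of X

# Posterior pi(theta | x) ∝ g(theta) * p_X(x | theta), normalized by a Riemann sum.
post = [normal_pdf(t, 0.0, 1.0) * normal_pdf(x, t, 1.0) for t in grid]
Z = sum(post) * dtheta
post = [p / Z for p in post]

# Posterior predictive q(y | x) = ∫ p_Y(y | theta) * pi(theta | x) dtheta.
def q(y):
    return sum(normal_pdf(y, t, 1.0) * p for t, p in zip(grid, post)) * dtheta

# Closed form for this model: Y | x ~ N(x/2, 3/2); the two should agree closely.
y = 0.7
print(q(y), normal_pdf(y, x / 2, 1.5))
```

The agreement (up to discretization error on the grid) illustrates that the textbook formula is just the factorization above integrated against the posterior.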