I am having trouble coming up with an interpretation of $P(Y|\theta)$ in Bayesian statistics.
Set up an experiment of flipping a coin in a sequence of trials. Let $Y$ be the number of heads, and let $\theta$ be the probability of heads. Then $P(Y|\theta)$ is the likelihood of the data collected. Now $\theta$ follows some prior distribution $P(\theta)$. To estimate $\theta$, I consider the posterior $P(\theta|Y)=\frac{P(Y|\theta)P(\theta)}{\int P(Y|\theta)P(\theta)\,d\theta}$ and take, for example, its mean.
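For concreteness, here is a minimal sketch of this computation, assuming a Beta prior (which is conjugate to the binomial likelihood) and hypothetical counts for $n$ and $Y$:

```python
from math import comb

# Hypothetical observed data: n flips, y of them heads
n, y = 20, 13

# Beta(a, b) prior on theta; a = b = 1 is the uniform prior
a, b = 1.0, 1.0

# The likelihood P(Y = y | theta), evaluated at any candidate theta
def likelihood(theta):
    return comb(n, y) * theta**y * (1 - theta)**(n - y)

# With a Beta(a, b) prior, the posterior is Beta(a + y, b + n - y),
# so the posterior mean has the closed form below
posterior_mean = (a + y) / (a + b + n)
print(posterior_mean)  # (1 + 13) / (2 + 20) = 14/22 ≈ 0.636
```

Note that `likelihood(theta)` is a function of $\theta$ evaluated on the *fixed* observed data, not something measured separately for each $\theta$.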
$\textbf{Q:}$ How would I collect such data? What experiment could I set up to measure $P(Y|\theta)$? Would I need a biased coin whose probability of heads can be tuned to any $\theta$? In reality, when one is given a single coin to experiment with, it seems possible to sample from $P(Y|\theta)$ only for that coin's one value of $\theta$, not for all $\theta$.
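To make the last point concrete, here is a small simulation sketch (with a hypothetical true value $\theta = 0.6$): a single physical coin fixes one $\theta$, and repeating the experiment only draws samples of $Y$ from that one conditional distribution.

```python
import random

random.seed(0)
true_theta = 0.6  # fixed by the physical coin; unknown to the experimenter

def run_experiment(n_flips):
    """One experiment: flip the single coin n_flips times, count heads."""
    return sum(random.random() < true_theta for _ in range(n_flips))

# Repetition samples Y ~ P(Y | theta = 0.6) only; no experiment with
# this one coin can probe P(Y | theta) at other values of theta.
samples = [run_experiment(50) for _ in range(1000)]
print(sum(samples) / len(samples))  # close to 50 * 0.6 = 30
```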