Understanding Bayesian inference

I do have some experience with measure-theoretic probability, but fundamental statistics is not my forte. I have just realized that I may have had the wrong impression about how exactly Bayesian inference helps us.

As far as I understand, it definitely helps in the following setup. Say you have $X \sim P_X = \mathcal N(\mu, \sigma^2)$ where the parameters have specified prior distributions $\mu\sim P_\mu$ and $\sigma \sim P_\sigma$. If we observe $X = x$, we can update $P_\mu\to P'_\mu$ and $P_\sigma\to P'_\sigma$ via Bayesian inference to account for the new information. The posterior distributions $P'_\mu, P'_\sigma$ then depend both on the observed data $x$ and on the corresponding priors $P_\mu, P_\sigma$.
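To make this first setup concrete, here is a minimal sketch of the simplest conjugate case: $\sigma$ is assumed known, and only $\mu$ carries a prior $\mu \sim \mathcal N(\mu_0, \tau_0^2)$. Then the posterior after observing $X = x$ is again normal, with precision-weighted mean and summed precisions. (The specific prior values below are illustrative, not from the question.)

```python
def normal_normal_update(mu0, tau0_sq, sigma_sq, x):
    """Posterior of mu given one draw x ~ N(mu, sigma_sq),
    under the prior mu ~ N(mu0, tau0_sq), with sigma_sq known.

    Returns (posterior_mean, posterior_variance)."""
    # Precisions (inverse variances) add under conjugate updating.
    post_var = 1.0 / (1.0 / tau0_sq + 1.0 / sigma_sq)
    # Posterior mean is the precision-weighted average of prior mean and data.
    post_mean = post_var * (mu0 / tau0_sq + x / sigma_sq)
    return post_mean, post_var

# Illustrative numbers: prior N(0, 1), noise variance 1, observation x = 2.
m, v = normal_normal_update(mu0=0.0, tau0_sq=1.0, sigma_sq=1.0, x=2.0)
print(m, v)  # posterior mean 1.0, posterior variance 0.5
```

Observing further draws $x_i$ just means feeding the posterior back in as the next prior, which is exactly the updating loop described above.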

What I wonder about is the following setup, though. Say again you're given some $P_X$, just a single fixed measure with no priors attached, and you observe a draw $X = x$. How can we use Bayes to update $P_X$ based on what we have just seen? In general we would of course see multiple draws $\{X = x_i\}_{i=1}^N$, but I guess analyzing a single draw is sufficient for starters.