Why is this wrong? Normalising constant by Monte Carlo integration.


I have confused myself with probabilities and could really use someone to point out where I went wrong, because I don't think my result can be correct!

Here is my train of thought:


Bayes' rule is:

$p(\theta | y) = \frac{p(\theta) p(y|\theta)}{p(y)}$

Now the usual reason people say this is hard is because we don't know $p(y)$. We can write:

$p(y) = \int p(\theta, y) d\theta$

Which is just using marginalisation.

Then:

$p(y) = \int p(\theta, y)d\theta = \int p(\theta) p(y|\theta)d\theta = \mathbb{E}_{\theta \sim p(\theta)}p(y|\theta)$

And then we just do Monte Carlo integration:

$p(y) = \mathbb{E}_{\theta \sim p(\theta)}p(y|\theta) \approx \frac{1}{N}\sum_{n=1}^{N}p(y|\theta_n)$

with $\theta_n$ drawn from $p(\theta)$.
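To make this concrete, here is a small sketch (not from the original post) using a hypothetical Beta-Bernoulli model, where the normalising constant is known in closed form so the Monte Carlo estimate can be checked: with a Uniform(0, 1) prior on $\theta$ and $k$ successes in $n$ Bernoulli trials, $p(y) = \int_0^1 \theta^k (1-\theta)^{n-k} d\theta = 1 / \big((n+1)\binom{n}{k}\big)$.

```python
import math
import random

random.seed(0)

# Hypothetical example: y = 7 successes in n = 10 Bernoulli trials,
# with a Uniform(0, 1) prior on theta (i.e. Beta(1, 1)).
n, k = 10, 7

def likelihood(theta):
    # p(y | theta) for one fixed ordering of the observed sequence
    return theta**k * (1 - theta)**(n - k)

# Monte Carlo estimate: p(y) ~= (1/N) * sum_n p(y | theta_n), theta_n ~ prior
N = 100_000
estimate = sum(likelihood(random.random()) for _ in range(N)) / N

# Exact value for comparison: B(k+1, n-k+1) = 1 / ((n+1) * C(n, k))
exact = 1.0 / ((n + 1) * math.comb(n, k))

print(estimate, exact)  # the two values should agree to a few significant figures
```

In this low-dimensional toy example the estimator converges quickly; as the answer below implies, the practical difficulty arises when the prior places little mass where the likelihood is large, so most sampled $\theta_n$ contribute almost nothing and the variance of the estimator blows up.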


If this is all correct, can't we easily compute the normalising constant by Monte Carlo integration: draw samples from the prior and average their likelihoods? Where am I going wrong? Does it just converge very slowly, or is there a fundamental flaw in my derivation (maybe in how I've used the expectation)?

Thanks!

Best answer:

You are indeed right: what you have written is the basis of many Monte Carlo methods. For instance, in importance sampling, the importance weights compensate for the unknown normalising constant.
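To illustrate the importance-sampling point, here is a hypothetical sketch (same toy Beta-Bernoulli model as one might use: Uniform prior, 7 successes in 10 trials). Self-normalised importance sampling estimates posterior expectations using only the *unnormalised* posterior $p(\theta)p(y|\theta)$; dividing by the sum of the weights cancels the unknown $p(y)$.

```python
import random

random.seed(0)

# Hypothetical Beta-Bernoulli setup: 7 successes in 10 trials, Uniform(0, 1) prior.
n, k = 10, 7

def unnorm_posterior(theta):
    # p(theta) * p(y | theta); the normalising constant p(y) is never needed.
    # The Uniform prior density is 1 on (0, 1), so only the likelihood remains.
    return theta**k * (1 - theta)**(n - k)

# Self-normalised importance sampling with proposal q = Uniform(0, 1):
# weights w_n = p(theta_n) p(y | theta_n) / q(theta_n) are unnormalised,
# and dividing by their sum cancels the unknown constant p(y).
N = 100_000
samples = [random.random() for _ in range(N)]
weights = [unnorm_posterior(t) for t in samples]  # q(theta) = 1 here
posterior_mean = sum(w * t for w, t in zip(weights, samples)) / sum(weights)

# The exact posterior is Beta(k+1, n-k+1) = Beta(8, 4), whose mean is 8/12.
print(posterior_mean)  # should be close to 2/3
```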

In particular, what you show is exactly what particle Markov chain Monte Carlo methods use to compute the normalising constant (the marginal likelihood) via particle filters.