Iterative Bayesian inference on a continuous distribution


My entire class of 7 masters engineering students is baffled by this sample question from a previous final exam in a pattern recognition course. The prof's advice was "try harder to follow the [extremely dissimilar] example in the text."

Question follows:

Given the following set of sequentially applied data $D=\{1,10,5,-1\}$, under the assumptions that

  • $p(x) \sim N(\mu=7.5,\sigma)$
  • $\theta = \sigma$
  • $\theta \in [-10,10]$
  • $\theta$ distributed with minimum information assumptions

From an initial guess of $\theta=1$, determine the estimated $p(\theta|D_k)$ produced by applying Bayes estimation for $k=1,2,3,4$.

Question ends.

It seems likely from context that he intends us to use iterative inference to determine a probability distribution for the unknown variance, using $$ p(\theta|D_n)=\frac{p(x_n|\theta)p(\theta|D_{n-1})}{\int p(x_n|\theta)p(\theta|D_{n-1}) d\theta} $$

but none of us has the foggiest idea how to apply this formula on the first iteration to obtain the prior for the next. My best guess is that we're supposed to do something like the following, but it seems... excessively complex for what is ultimately just one question out of 20 on an exam:

  • $p(x_1|\theta) = p(x=1|\sigma) = \frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(1-7.5)^{2}}{2\sigma^{2}}}$ - the likelihood, taken from the normal PDF
  • $p(\theta|D_{n-1}) = U[-10,10] = \frac{1}{20}$ - initial guess for minimum information
  • the denominator is the total-probability integral $\int_{-10}^{10}\frac{1}{20}\,p(x=1|\sigma)\,d\sigma$
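If it helps, here is a hedged closed-form view of that first step (my own working, not from the question). Since the normal PDF requires $\sigma > 0$, only $(0,10]$ contributes, and absorbing all constants into the normalizer gives

$$ p(\theta|D_1) \propto \frac{1}{\sigma}\,e^{-\frac{(1-7.5)^{2}}{2\sigma^{2}}}, \qquad \sigma \in (0,10]. $$

Setting the derivative of the log of this to zero, $-\frac{1}{\sigma} + \frac{6.5^{2}}{\sigma^{3}} = 0$, puts the posterior mode at $\sigma = |1-7.5| = 6.5$ after the first datum.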

Integrating that normal PDF over $\sigma$ analytically seems awfully messy. Is there a better method? Would using the log-likelihood work here?
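For an exam you would presumably sketch the update symbolically, but to check intuition the whole recursion can be done numerically on a grid, with the denominator integral replaced by a Riemann sum. A minimal sketch (variable names `sigma`, `post`, `modes` are my own; I restrict the grid to $(0, 10]$ because the normal PDF is undefined for $\sigma \le 0$, even though the question states $[-10,10]$):

```python
import numpy as np

mu = 7.5
data = [1, 10, 5, -1]

# Grid over theta = sigma; the uniform prior on this grid plays the
# role of the minimum-information prior 1/20 from the question.
sigma = np.linspace(0.05, 10.0, 2000)
dx = sigma[1] - sigma[0]
post = np.ones_like(sigma)
post /= post.sum() * dx                 # normalize the flat prior

def likelihood(x, s):
    """Normal likelihood p(x | theta = sigma) with known mean mu."""
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s ** 2)

modes = []
for k, x in enumerate(data, start=1):
    post = likelihood(x, sigma) * post  # numerator of the Bayes update
    post /= post.sum() * dx             # denominator integral, done numerically
    modes.append(sigma[np.argmax(post)])
    print(f"k={k}: posterior peaks near sigma = {modes[-1]:.2f}")
```

With a flat prior on $\sigma$, the posterior mode after $k$ points should track the closed-form maximizer $\sqrt{\frac{1}{k}\sum_{i\le k}(x_i-7.5)^2}$, which is a quick sanity check on the grid result.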