Suppose my prior for theta, the true probability of success, is a mixture of two beta distributions:

prior = (0.5) dbeta(a1, b1) + (0.5) dbeta(a2, b2).

I then run a binomial experiment, observe x successes in n trials, and want the posterior distribution of theta. I know the posterior should have the form

posterior = (p) dbeta(a1 + x, b1 + n - x) + (1 - p) dbeta(a2 + x, b2 + n - x),

where p is some new mixing weight between 0 and 1.

Here's where I get confused. I know that, in general, posterior ∝ (prior)(likelihood). How do you get from that general relation to the mixture form above when the prior is a mixture of two betas? Why is the posterior not just

posterior = (0.5)(prior1)(likelihood) + (0.5)(prior2)(likelihood)?

And finally, how do you solve for p, the posterior mixing weight?
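To make this concrete, here is my attempt at computing p numerically (in Python, since I'm more comfortable there; all the numbers a1, b1, a2, b2, x, n are made up). My guess is that each component's posterior weight is proportional to its prior weight times that component's marginal likelihood of the data, but I'm not sure this is right:

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b), via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

# made-up hyperparameters and data
a1, b1 = 2.0, 8.0
a2, b2 = 8.0, 2.0
x, n = 6, 10

# Marginal likelihood of the data under component k is
#   C(n, x) * B(a_k + x, b_k + n - x) / B(a_k, b_k);
# the binomial coefficient C(n, x) is the same for both components,
# so it cancels when normalizing and can be dropped.
log_m1 = log_beta(a1 + x, b1 + n - x) - log_beta(a1, b1)
log_m2 = log_beta(a2 + x, b2 + n - x) - log_beta(a2, b2)

# my guessed update: posterior weight ∝ prior weight * marginal likelihood
w1 = 0.5 * math.exp(log_m1)
w2 = 0.5 * math.exp(log_m2)
p = w1 / (w1 + w2)
print(p)
```

With these numbers the data (6/10 successes) favor the second component, which is centered near 0.8, so I'd expect p (the weight on the first component) to come out well below 0.5. Is this the right way to get p, and if so, how does it follow from posterior ∝ prior × likelihood?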
Thanks in advance!