Posterior distribution for a binomial likelihood with a mixture of uniform and beta priors


Let $X=(X_1,X_2,\dots,X_n)$ be a random sample from the binomial distribution such that $X_i\mid\theta\sim \mathrm{Bin}(m,\theta)$ with $m$ known. If $\theta\sim p\,\mathrm{Unif}(0,1)+(1-p)\,\mathrm{Beta}(a,b)$ with $a,b,p$ known, I'd like to find the posterior of $\theta\mid X$. I know that I have to use Bayes' theorem, but I don't know how to deal with the fact that the distribution of $\theta$ is a mixture.
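For concreteness, the model can be simulated as follows. This is a minimal sketch; the numeric values of $m$, $n$, $p$, $a$, $b$ are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical known constants (not given in the question)
m, n = 10, 25            # binomial size, sample size
p, a, b = 0.3, 2.0, 5.0  # mixture weight and Beta hyperparameters

# Draw theta from the mixture prior:
# Unif(0,1) with probability p, Beta(a,b) with probability 1-p
if rng.random() < p:
    theta = rng.uniform(0.0, 1.0)
else:
    theta = rng.beta(a, b)

# Sample X_1, ..., X_n | theta ~ Bin(m, theta)
X = rng.binomial(m, theta, size=n)
```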

Best answer:

Let $\bar{X}$ be the mean of the sample.

The posterior density is proportional to
$$\left( p+(1-p)\,\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,\theta^{a-1}(1-\theta)^{b-1} \right)\prod_{i=1}^{n} \theta^{X_i} (1-\theta)^{m-X_i}$$ $$=p\,\theta^{n\bar{X}}(1-\theta)^{n(m-\bar{X})}+(1-p)\,\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,\theta^{a+n\bar{X}-1}(1-\theta)^{b+n(m-\bar{X})-1}.$$
Multiplying and dividing each term by the corresponding beta normalizing constant (and using $n\bar{X}+n(m-\bar{X})=nm$) gives
$$p\,\frac{\Gamma(n\bar{X}+1)\Gamma(n(m-\bar{X})+1)}{\Gamma(nm+2)}\cdot\frac{\Gamma(nm+2)}{\Gamma(n\bar{X}+1)\Gamma(n(m-\bar{X})+1)}\,\theta^{n\bar{X}+1-1}(1-\theta)^{n(m-\bar{X})+1-1}$$ $$+\,(1-p)\,\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,\frac{\Gamma(n\bar{X}+a)\Gamma(n(m-\bar{X})+b)}{\Gamma(nm+a+b)}\cdot\frac{\Gamma(nm+a+b)}{\Gamma(n\bar{X}+a)\Gamma(n(m-\bar{X})+b)}\,\theta^{a+n\bar{X}-1}(1-\theta)^{b+n(m-\bar{X})-1}.$$

This is a mixture of two beta distributions.
The first is $Beta(n\bar{X}+1,n(m-\bar{X})+1)$ and the second is $Beta(n\bar{X}+a,n(m-\bar{X})+b)$.
The mixture probabilities (weights) are proportional to $$p \frac{\Gamma(n\bar{X}+1)\Gamma(n(m-\bar{X})+1)}{\Gamma(n\bar{X}+1+n(m-\bar{X})+1)} =p \frac{\Gamma(n\bar{X}+1)\Gamma(n(m-\bar{X})+1)}{\Gamma(nm+2)}$$ and $$(1-p) \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \frac{\Gamma(n\bar{X}+a)\Gamma(n(m-\bar{X})+b)}{\Gamma(n\bar{X}+a+n(m-\bar{X})+b)} =(1-p) \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \frac{\Gamma(n\bar{X}+a)\Gamma(n(m-\bar{X})+b)}{\Gamma(nm+a+b)}$$

The weights have to be multiplied by a constant so that they add up to $1$; i.e., divide each of them by their sum to obtain the normalized weights.
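Putting the pieces together, the posterior can be evaluated numerically. Below is a sketch using SciPy; the data summary `nx` $=n\bar{X}$ and the hyperparameters are hypothetical values chosen for illustration. Log-gamma is used because the raw gamma ratios overflow for even moderate $nm$, and the weights are normalized in log space:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import beta as beta_dist

# Hypothetical values (not from the question)
m, n = 10, 25
p, a, b = 0.3, 2.0, 5.0
nx = 60            # total successes, n * Xbar
nf = n * m - nx    # total failures, n * (m - Xbar)

# Log of the unnormalized mixture weights from the derivation above
log_w1 = np.log(p) + gammaln(nx + 1) + gammaln(nf + 1) - gammaln(n * m + 2)
log_w2 = (np.log(1 - p)
          + gammaln(a + b) - gammaln(a) - gammaln(b)
          + gammaln(nx + a) + gammaln(nf + b) - gammaln(n * m + a + b))

# Normalize so the weights sum to 1 (log-sum-exp for stability)
shift = max(log_w1, log_w2)
w1, w2 = np.exp(log_w1 - shift), np.exp(log_w2 - shift)
total = w1 + w2
w1, w2 = w1 / total, w2 / total

def posterior_pdf(theta):
    """Density of the posterior mixture
    w1 * Beta(nx + 1, nf + 1) + w2 * Beta(nx + a, nf + b)."""
    return (w1 * beta_dist.pdf(theta, nx + 1, nf + 1)
            + w2 * beta_dist.pdf(theta, nx + a, nf + b))
```

Because the likelihood data enter only through $n\bar{X}$ and $n(m-\bar{X})$, the whole posterior is determined by these two sufficient statistics.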