How do I know when to use the Beta distribution, or is this simply a question of finding the posterior distribution using Bayes' theorem?


[Image of the problem statement; not transcribed]

Please help to guide me along the questions as I am confused on how to start off. Thank you!


Best answer:

Some thoughts to try to help:

There is initial knowledge of the parameter $\theta$: it is uniformly distributed on $\left[0,1\right]$, with prior density $\pi\left(\theta\right)$. Given $\theta$, each $X_{i}$ has its own conditional distribution $\pi(X_{i}|\theta)$. We can then use the observed data and Bayes' theorem to determine a posterior distribution.

\begin{align*} \pi\left(\theta|X_{1}=1,X_{2}=1,\dots,X_{10}=1\right)&=\frac{\pi(X_{1}=1,X_{2}=1,\dots,X_{10}=1|\theta)\pi(\theta)}{\pi(X_{1}=1,X_{2}=1,\dots,X_{10}=1)} \end{align*}

The numerator of the right-hand side is known. The denominator is harder to compute directly, but it is a constant independent of $\theta$, so one can work with the unnormalized distribution.

\begin{align*} \pi\left(\theta|X_{1}=1,X_{2}=1,\dots,X_{10}=1\right)&\propto Z(\theta)\\ Z(\theta)&:=\pi(X_{1}=1,X_{2}=1,\dots,X_{10}=1|\theta)\pi(\theta)\\ C&:=\int_{0}^{1}Z(\theta)\mathrm{d}\theta\\ \end{align*}
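As a numerical sanity check, here is a minimal sketch of computing $C$, assuming (as the problem suggests) that each $X_{i}\mid\theta$ is Bernoulli$(\theta)$ and all ten observations equal $1$, so that $Z(\theta)=\theta^{10}\cdot 1$ on $[0,1]$:

```python
def Z(theta):
    # Likelihood of ten successes, theta**10, times the uniform
    # prior density on [0, 1] (which is 1). Assumption: Bernoulli data.
    return theta ** 10

# Midpoint Riemann sum for C = integral of Z(theta) over [0, 1]
n = 100_000
C = sum(Z((k + 0.5) / n) for k in range(n)) / n
print(C)  # close to 1/11, the exact value of the integral of theta^10
```

The exact answer is $\int_{0}^{1}\theta^{10}\,\mathrm{d}\theta=\tfrac{1}{11}$, and the numeric sum agrees to many decimal places.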

From this one can determine the posterior distribution

\begin{align*} \pi\left(\theta|X_{1}=1,X_{2}=1,\dots,X_{10}=1\right)&=\frac{1}{C}Z\left(\theta\right)\\ \end{align*}
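To make this concrete (again assuming each $X_{i}\mid\theta$ is Bernoulli$(\theta)$ and every observation equals $1$), the pieces work out to

\begin{align*} Z(\theta)&=\theta^{10}\cdot 1=\theta^{10},\\ C&=\int_{0}^{1}\theta^{10}\,\mathrm{d}\theta=\frac{1}{11},\\ \pi\left(\theta|X_{1}=1,X_{2}=1,\dots,X_{10}=1\right)&=11\,\theta^{10}, \end{align*}

which is exactly the $\mathrm{Beta}(11,1)$ density. This is where the Beta distribution enters: the uniform prior is $\mathrm{Beta}(1,1)$, and with Bernoulli observations the posterior stays in the Beta family, so "use Bayes' theorem" and "use Beta" are the same answer here.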

Note: the observable random variables are conditionally iid given the parameter $\theta$ and so

\begin{align*} \pi(X_{1}=1,X_{2}=1,\dots,X_{10}=1|\theta)&=\pi(X_{1}=1|\theta)\pi(X_{2}=1|\theta)\dots\pi(X_{10}=1|\theta)\\ \end{align*}

Given $\theta$, each $X_{j}$ is a Bernoulli random variable, so use the Bernoulli pmf for each $\pi(X_{j}=x_{j}|\theta)$.
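Putting the factorization and the Bernoulli pmf together, here is a hedged sketch that builds $Z(\theta)$ from the ten individual factors and checks the posterior mean numerically (the function names are illustrative, and the Bernoulli model with ten observed $1$s is the assumption from the problem):

```python
from math import prod

def bernoulli_pmf(x, theta):
    # pmf of a Bernoulli(theta) variable: theta if x == 1, else 1 - theta
    return theta if x == 1 else 1.0 - theta

observations = [1] * 10  # the ten observed values, all equal to 1

def Z(theta):
    # Conditionally iid: the joint pmf factors into ten Bernoulli terms,
    # times the uniform prior density (which is 1 on [0, 1]).
    return prod(bernoulli_pmf(x, theta) for x in observations)

# Posterior mean via midpoint Riemann sums:
# E[theta | data] = (integral of theta * Z) / (integral of Z)
n = 100_000
thetas = [(k + 0.5) / n for k in range(n)]
C = sum(Z(t) for t in thetas) / n
mean = (sum(t * Z(t) for t in thetas) / n) / C
print(mean)  # close to 11/12, the mean of a Beta(11, 1) distribution
```

That the computed mean matches $11/12$, the mean of $\mathrm{Beta}(11,1)$, is a quick consistency check on the whole derivation.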