I have a coin with unknown success probability $\theta$. I regard $\theta$ as a random variable with a uniform $U(0,1)$ prior distribution. I am trying to understand how to get a good estimate of $\theta$ by applying Bayes' rule after each observation.
Let us say that in the first trial I sampled $\theta$ from $U(0,1)$ and got $\theta = 0.4$, and observed a success (a head). In the second trial I got $\theta = 0.2$ and observed a failure. In the third trial I got $\theta = 0.4$ and observed a failure. How can I use this information to sequentially estimate $\theta$?
I think the general update rule is as follows:

$\pi_{n+1}(\theta) \propto \pi_n(\theta)\, f(x_{n+1}\mid\theta)$

I believe that $\pi_0$ is the $U(0,1)$ prior, $\pi_{n+1}$ is the posterior distribution after $n+1$ observations, and $f$ is the Bernoulli likelihood. But I am confused about how to use the above equation to get the correct distribution estimate of $\theta$ from the above observations. Can anyone help me, please?
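To make the sequential update concrete, here is a minimal sketch (assuming the fixed-but-unknown-$\theta$ interpretation) that applies the update rule on a grid of candidate $\theta$ values, starting from a $U(0,1)$ prior and multiplying in a Bernoulli likelihood for each observation. The observation sequence head, tail, tail is taken from the question.

```python
import numpy as np

# Grid approximation of the sequential update
#   pi_{n+1}(theta) ∝ pi_n(theta) * f(x_{n+1} | theta)
# starting from a U(0,1) prior with a Bernoulli likelihood.
# Observations from the question: head (1), tail (0), tail (0).
observations = [1, 0, 0]

theta = np.linspace(0.0, 1.0, 1001)    # grid of candidate theta values
posterior = np.ones_like(theta)        # U(0,1) prior: constant density

for x in observations:
    likelihood = theta if x == 1 else (1.0 - theta)
    posterior = posterior * likelihood  # multiply in the Bernoulli likelihood
    posterior = posterior / posterior.sum()  # renormalize over the grid

# Posterior mean after 1 head in 3 flips; analytically (s+1)/(n+2) = 2/5.
post_mean = float((theta * posterior).sum())
print(round(post_mean, 3))  # ≈ 0.4
```

Note the grid approximation is only for illustration; with a uniform prior and Bernoulli data the posterior is available in closed form as a Beta distribution.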
One part that confuses me: since $\theta$ is assumed to come from $U(a,b)$, can't we simply take $\theta$ to be the mean of $U(a,b)$, rather than treating $\theta$ as unknown with only $U(a,b)$ known? (BTW, this is not a made-up question; I am following a work where they assume such a setting.)
If $\theta$ changes before each flip, then the previous flips give you no information about the next $\theta$; its distribution is still $\mathrm{Unif}(0,1)$. But if $\theta$ is fixed (unknown, with all values between 0 and 1 assumed equally likely a priori), then after $n$ flips with $s$ successes (i.e., heads) the posterior distribution of $\theta$ is $\mathrm{Beta}(s+1,\, n-s+1)$, which has mean $\frac{s+1}{n+2}$.
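Since $U(0,1)$ is the $\mathrm{Beta}(1,1)$ distribution and the Beta family is conjugate to the Bernoulli likelihood, the posterior above can be computed in closed form. A small sketch (the helper name `beta_posterior` is mine, not from the question):

```python
# Conjugate update: with a U(0,1) = Beta(1,1) prior and a fixed unknown
# theta, after n flips with s heads the posterior is Beta(s+1, n-s+1),
# whose mean is (s+1)/(n+2).
def beta_posterior(n, s):
    alpha, beta = s + 1, n - s + 1
    mean = alpha / (alpha + beta)  # = (s+1)/(n+2)
    return alpha, beta, mean

# The question's data: 3 flips, 1 head -> Beta(2, 3), posterior mean 0.4.
print(beta_posterior(3, 1))  # (2, 3, 0.4)
```

So for the head/tail/tail sequence in the question, the posterior mean $2/5 = 0.4$ is the natural point estimate of $\theta$, and each new flip just increments $n$ and (for a head) $s$.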