How do you infer the success probability of a Bernoulli random variable from independent samples


Let's say we have a coin (not necessarily fair), we flip it 100 times, and all of the outcomes are tails. We can immediately conclude that the probability of getting tails is not 0, and we intuitively expect it to be high. I expected to prove something like "the probability of the event that the probability of tails is bigger than 0.7 is at least 0.9"; however, since there is no underlying probability distribution on the probability of tails that I can see, I failed. What am I missing, or is it not possible to infer anything from experiments? If not, why?

Notes:

1) I saw the question "Fair dice probability problem", but there is no answer to my question there.

2) I know about likelihood functions, but they do not make probabilistic arguments. I am trying to give a confidence interval for the probability of tails.

2 Answers

Best answer:

You say you are trying to give a confidence interval for the probability of tails. Strictly speaking, confidence intervals do not make statements about the probability of the true value lying in a particular stated interval.

There are many possible methods for constructing a confidence interval for a binomial proportion, each with its own properties.

When all $n$ observations are the same, this can raise issues for approaches that attempt to give a two-sided interval. A simple alternative is the so-called rule of three, which gives a $95\%$ confidence interval of $\left[0,\frac3n\right]$ (all failures) or $\left[1-\frac3n,1\right]$ (all successes). In your example with $n=100$ you would get a confidence interval of $[0.97,1]$.
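A minimal sketch of the rule of three described above (the function name `rule_of_three` is my own; it is not from the answer):

```python
# Rule of three: after n Bernoulli trials that all come out the same way,
# an approximate 95% confidence interval for the success probability is
# [1 - 3/n, 1] if every trial succeeded, or [0, 3/n] if every trial failed.

def rule_of_three(n, all_successes=True):
    """Approximate 95% CI for p after n identical Bernoulli outcomes."""
    if all_successes:
        return (1 - 3 / n, 1.0)
    return (0.0, 3 / n)

# 100 flips, all tails (treating "tails" as the success):
lo, hi = rule_of_three(100)
print(lo, hi)  # interval [0.97, 1], as in the answer
```

The bound comes from solving $(1-p)^n = 0.05$ approximately: $-n\ln(1-p)\approx np = \ln 20 \approx 3$.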

Another answer:

The phrase "the probability of the event that the probability of tails is bigger than..." only makes sense in a Bayesian framework. That is, you regard the parameter of your distribution as a random variable.

Then you'd write $$P(\theta \mid Y)=\frac{P(Y\mid \theta) P(\theta)}{P(Y)}=\frac{P(Y\mid \theta) P(\theta)}{\int P(Y\mid \theta) P(\theta) d\theta}$$

where $Y$ is the observation and $P(\theta)$ is the prior distribution of your parameter. Without a prior, we cannot go on.

If in this case you assume a uniform prior $P(\theta)$, with $\theta = P(\text{tails})$, then

$$P(\theta \mid Y)= 101 \, \theta^{100},$$ which is a Beta distribution (namely $\operatorname{Beta}(101,1)$), highly concentrated near $1$. In this case $P(\theta > 0.95 \mid Y) = 0.9943\ldots$
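A quick sketch checking this computation (the function name is mine, not from the answer). With a uniform prior and $100$ tails in $100$ flips, the normalizing constant is $\int_0^1 \theta^{100}\,d\theta = \frac{1}{101}$, the posterior is $\operatorname{Beta}(101,1)$ with CDF $t^{101}$, and so $P(\theta > t \mid Y) = 1 - t^{101}$:

```python
# Posterior tail probability for theta = P(tails) under a uniform prior,
# after observing n tails in n flips. The posterior is Beta(n + 1, 1),
# whose CDF is t**(n + 1), so P(theta > t | Y) = 1 - t**(n + 1).

def posterior_tail_prob(t, n=100):
    """P(theta > t | Y) for the Beta(n + 1, 1) posterior after n tails."""
    return 1.0 - t ** (n + 1)

print(posterior_tail_prob(0.95))  # about 0.9944, matching the answer's 0.9943...
```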