What is the probability that the coin was tossed $n$ times?


Q: A coin is tossed until $k$ heads have appeared. If a mathematician knows how many heads appeared, can he figure out the probability that the coin was tossed $n$ times?

What I tried: The number of heads depends on the number of tosses, so I tried Bayes' theorem: $P(X=k\mid N=n)P(N=n)/P(X=k)$, where $X$ is the random variable for the number of heads and $N$ for the number of tosses. $P(X=k\mid N=n)$ is the binomial distribution's probability mass function, but I don't know what $P(N=n)$ and $P(X=k)$ are.

Because the last toss is known to be heads, we only need to consider the tosses before the last one. There are $\binom{n-1}{k-1}$ combinations for $k-1$ heads to appear before the last one. So the probability is

$$P(N=n|X=k) = \binom{n-1}{k-1}p^k(1-p)^{n-k}$$

I believe this can also be done with Bayes' theorem. Can someone wiser show how it is done?


There are 2 best solutions below

Bayes' Theorem would be useful if the experiment consisted of choosing $N$ first, before starting to count the heads, and then flipping the coin exactly $N$ times. Then we'd have

$$ P(N=n|X=k) =\frac{P(X=k|N=n)P(N=n)}{P(X=k)} =\frac{\binom nkp^k(1-p)^{n-k}P(N=n)}{\sum_{n=0}^\infty \binom nkp^k(1-p)^{n-k}P(N=n)}. $$

In this model, the probabilities $P(N=n)$ need to be specified, as they describe how we make our advance choice of how many times to toss the coin.
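For illustration, here is a minimal Python sketch of that Bayes computation under an assumed prior. The uniform prior on $n = 1,\dots,20$ is purely hypothetical, chosen only so the sum over $n$ is finite:

```python
import math

def binom_pmf(k, n, p):
    """P(X = k | N = n): binomial probability of k heads in n tosses."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

# Illustration only: assume the prior P(N = n) = 1/20 for n = 1..20.
k, p = 3, 0.5
prior = {n: 1 / 20 for n in range(1, 21)}

# Bayes' theorem: posterior(n) = P(X=k|N=n) P(N=n) / P(X=k).
evidence = sum(binom_pmf(k, n, p) * prior[n] for n in prior)
posterior = {n: binom_pmf(k, n, p) * prior[n] / evidence for n in prior}
print(sum(posterior.values()))  # ≈ 1
```

Any other prior on $N$ would do; the point is only that some prior must be supplied before the posterior is defined.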

The experiment you describe is different: $k$ is a fixed parameter and every outcome ends with a total of exactly $k$ heads, so treating the number of heads as a random variable ("$X$") isn't useful. The random variable of interest is $N$: the number of tosses needed to get exactly $k$ heads.

In order to get $N=n$, we need to get exactly $k-1$ heads in the first $n-1$ tosses, in any order, followed by a head on the next toss. There are $\binom{n-1}{k-1}$ different $n$-toss result sequences that meet these conditions, and every such sequence has probability $p^k(1-p)^{n-k}$, so $P(N=n)=\binom{n-1}{k-1}p^k(1-p)^{n-k}$, as you computed.
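This closed form can be checked against a direct simulation of the experiment. A quick Python sketch (the seed, trial count, and parameter values are arbitrary choices):

```python
import math
import random

def pmf(n, k, p):
    """P(N = n): exactly k-1 heads in the first n-1 tosses, then a head."""
    return math.comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

def tosses_until_k_heads(k, p, rng):
    """Simulate tossing until the k-th head appears; return the toss count."""
    n = heads = 0
    while heads < k:
        n += 1
        if rng.random() < p:
            heads += 1
    return n

rng = random.Random(0)
k, p, trials = 3, 0.5, 200_000
counts = {}
for _ in range(trials):
    n = tosses_until_k_heads(k, p, rng)
    counts[n] = counts.get(n, 0) + 1

# Empirical frequency vs. the closed-form probability, side by side.
for n in range(k, k + 5):
    print(n, counts.get(n, 0) / trials, pmf(n, k, p))
```

The empirical frequencies agree with $\binom{n-1}{k-1}p^k(1-p)^{n-k}$ up to sampling noise.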

In other words, $P(N=n)$ in the given experiment is equal to $P(\sum_{i=1}^{n-1}X_i=k,X_n=1)$ in a different experiment that consists of tossing a coin $n$ times, and in this experiment, $X=\sum_{i=1}^{n-1}X_i$ is binomially distributed. But the random variables $X$ and $N$ don't exist in the same experiment, so you can't relate them with conditional probabilities.


The coin toss process can be described in the context of independent and identically distributed (i.i.d.) Bernoulli trials.

Suppose we define a random variable corresponding to the number of trials required to have $k$ successes (i.e. $k$ heads in this setting). We say $N \sim \mathrm{NegBin}(k, p)$, where $p$ is the probability of getting a head.

Sample space: $\{k, k+1, k+2, \dots\}$

Probability mass function (pmf): For $n = k, k+1, k+2, \dots,$ \begin{align*} f(n; k, p) = P(N = n) &= {n - 1 \choose k - 1} p^{k-1} (1-p)^{n-k} \cdot p\\ &= {n - 1 \choose k - 1} p^k (1-p)^{n-k}. \end{align*} Reasoning: If $n$ is the number of coin tosses (when $k$ heads were observed), then the first $(n-1)$ tosses resulted in $(k-1)$ heads and the last toss was a head.
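As a sanity check on this pmf, the following Python sketch verifies numerically that it sums to $1$ over a truncated support and that its mean comes out to $k/p$, the known mean of this (number-of-trials) formulation of the negative binomial. Parameter values are arbitrary:

```python
import math

def negbin_pmf(n, k, p):
    """P(N = n) for the number of tosses N needed to see the k-th head."""
    return math.comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

k, p = 4, 0.3
# Truncate the infinite support; the tail mass beyond n = 600 is negligible here.
support = range(k, 600)
total = sum(negbin_pmf(n, k, p) for n in support)
mean = sum(n * negbin_pmf(n, k, p) for n in support)
print(total)  # ≈ 1
print(mean)   # ≈ k / p
```

Note that some libraries (e.g. SciPy's `nbinom`) parameterize the distribution by the number of *failures* before the $k$-th success rather than the total number of trials, which is exactly the formulation pitfall mentioned in the remark below.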

Remark: More information on the negative binomial distribution can be found here. Be careful about the different formulations (see §1.3, "Alternative formulations").