Biased coin: proof that heads has probability two thirds on the second toss when the first toss was heads


From the new book Weird Math:

While the Bayesian approach may seem subjective, it can be made rigorous in an abstract sense. For example, suppose you had a coin that was biased. It could be biased by any amount from 0 percent heads to 100 percent heads, with each value equally likely. You toss it once, and it comes up heads. It’s possible to prove that the probability of a head on the second toss is two out of three using Bayesian probability.

My question
How do you prove that? Could the same result be reached with the frequentist approach?

On BEST ANSWER

Let $p=P(H)$ and assume that $p$ is uniformly distributed on $[0,1]$, i.e. the PDF is $f(p) = 1$ for $p \in [0,1]$ (and zero everywhere else).

Let $H_i$ be the event that the $i$th flip is Head. We want to calculate $P(H_2|H_1) = P(H_2 \cap H_1) / P(H_1)$.

  • Numerator: $P(H_2 \cap H_1) = \int^1_0 P(H_2 \cap H_1 | p)\ f(p)\ dp = \int^1_0 p^2 \cdot 1\ dp = \left[\frac{p^3}{3}\right]^1_0 = \frac{1}{3}$.

    • Note: for any specific value of $p$, $P(H_2 \cap H_1 | p) = p^2$ because, given the bias $p$, the flips are independent.
  • Denominator: $P(H_1) = \int^1_0 P(H_1 | p)\ f(p)\ dp = \int^1_0 p \cdot 1\ dp = \left[\frac{p^2}{2}\right]^1_0 = \frac{1}{2}$.

  • So $P(H_2|H_1) = P(H_2 \cap H_1) / P(H_1) = (\frac{1}{3}) / (\frac{1}{2}) = \frac{2}{3}$.

Note 1: the distribution of the bias itself, $p \sim U(0,1)$, matters. If a different model is assumed, the result changes. E.g. if $p$ is one of the three values $\{0, \frac{1}{2}, 1\}$ each with probability $\frac{1}{3}$ (i.e. "first randomly pick one of 3 coins in a bag..."), then $P(H_2 \cap H_1) = \frac{1}{3}(0 + \frac{1}{4} + 1) = \frac{5}{12}$ instead, while $P(H_1) = \frac{1}{3}(0 + \frac{1}{2} + 1) = \frac{1}{2}$ still, giving $P(H_2|H_1) = \frac{5}{6}$.
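The discrete bag in Note 1 can be checked exactly; a small sketch using exact rationals (the helper name `posterior_heads` is mine, not from the text):

```python
from fractions import Fraction

def posterior_heads(prior):
    """P(H2 | H1) for a discrete prior given as (p, weight) pairs."""
    num = sum(w * p * p for p, w in prior)  # P(H1 ∩ H2) = E[p^2]
    den = sum(w * p for p, w in prior)      # P(H1)      = E[p]
    return num / den

third = Fraction(1, 3)
bag = [(Fraction(0), third), (Fraction(1, 2), third), (Fraction(1), third)]
print(posterior_heads(bag))  # 5/6
```

The same two expectations, $\mathbb{E}[p^2]$ and $\mathbb{E}[p]$, are exactly the numerator and denominator integrals of the continuous case, with the integral replaced by a sum.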

Note 2: $P(H_1) = 1/2$ could also have been argued from the symmetry of the bias itself, i.e. from $f(p)$ being symmetric about $1/2$. In fact, $P(H_1) = \int^1_0 p\ f(p)\ dp = \mathbb{E}[p]$, which equals $1/2$ for any bias distribution symmetric about $1/2$.

Note 3: For a fair coin $p = 1/2$ and obviously $P(H_2 \cap H_1) = 1/4$. But despite $f(p)$ being symmetric about $1/2$, i.e. the bias distribution being "fair" (equally likely to be biased either way, by equal amounts), nevertheless $P(H_2 \cap H_1) = 1/3 > 1/4$. This is where the "Bayesian"-ness comes in: observing the first head shifts belief about $p$ toward larger values, which makes a second head more likely.

BTW, you may want to work through the example of 3 coins: one two-headed, one two-tailed, and one fair. The answer is $5/6$, but more importantly it can be calculated with plain combinatorics. The Bayesian idea is to have (or to assume) some prior "bag of (potentially uncountably infinitely many) coins".
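The combinatorial count for the 3-coin bag can be sketched as follows: picking a coin and then a face for each of two tosses, all uniformly, gives $3 \times 2 \times 2 = 12$ equally likely elementary outcomes, which we simply enumerate:

```python
from itertools import product

# Each coin is represented by its pair of faces.
coins = [("H", "H"), ("T", "T"), ("H", "T")]

# 3 coins x 2 face choices x 2 face choices = 12 equally likely outcomes.
outcomes = [(c[i], c[j]) for c, i, j in product(coins, (0, 1), (0, 1))]

first_heads = [o for o in outcomes if o[0] == "H"]
both_heads = [o for o in first_heads if o[1] == "H"]
print(len(both_heads), "/", len(first_heads))  # 5 / 6
```

The two-headed coin contributes 4 of the 6 first-toss-heads outcomes and the fair coin the other 2; of those 6, 5 also show heads on the second toss, so $P(H_2|H_1) = 5/6$.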