In a different question, I asked for clarification on the following problem because I wanted to understand what it was asking. Now I have attempted it and wish to know whether my solution is right.
Problem statement:
A drawer contains two coins. One is an unbiased coin, which when tossed, is equally likely to turn up heads or tails. The other is a biased coin, which will turn up heads with probability $p$ and tails with probability $1 − p$. One coin is selected (uniformly) at random from the drawer. Two experiments are performed:
a) The selected coin is tossed $n$ times. Given that the coin turns up heads $k$ times and tails $n − k$ times, what is the probability that the coin is biased?
b) The selected coin is tossed repeatedly until it turns up heads $k$ times. Given that the coin is tossed $n$ times in total, what is the probability that the coin is biased?
My attempt:
a) Let $F$ be the set of outcomes where I have chosen the fair coin, and $B$ be the set of outcomes where I have chosen the biased coin. Let $A_k$ be the set of outcomes where I tossed $n$ times and got $k$ heads. I need to find $P(B|A_k)$.
$$P(A_k \cap F) = \frac{1}{2} {{n}\choose{k}}\frac{1}{2^n}$$ $$P(A_k \cap B) = \frac{1}{2} {{n}\choose{k}}p^k (1-p)^{n-k}$$ Since $F$ and $B$ partition the sample space, we have
$$P(A_k) = \frac{1}{2}{{n}\choose{k}} \left \{ p^k (1-p)^{n-k}+\frac{1}{2^n} \right \}$$
From Bayes' theorem, we know that
$$P(B|A_k)=\frac{P(A_k|B)}{P(A_k)}$$
Hence we get
$$P(B|A_k)=\frac{p^k (1-p)^{n-k}}{p^k (1-p)^{n-k}+\frac{1}{2^n}}$$
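Not part of the original problem, but as a quick sanity check of this closed form, here is a small Monte Carlo simulation in Python (the function names are my own). It picks a coin uniformly, tosses it $n$ times, and conditions on seeing exactly $k$ heads:

```python
import random

def posterior_biased(n, k, p):
    # Closed form derived above: P(B | k heads in n tosses)
    biased = p**k * (1 - p)**(n - k)
    fair = 0.5**n
    return biased / (biased + fair)

def estimate_posterior(n, k, p, trials=200_000, seed=0):
    # Pick a coin uniformly at random, toss it n times,
    # and condition on the event "exactly k heads".
    rng = random.Random(seed)
    hits = biased_hits = 0
    for _ in range(trials):
        is_biased = rng.random() < 0.5
        q = p if is_biased else 0.5
        heads = sum(rng.random() < q for _ in range(n))
        if heads == k:
            hits += 1
            biased_hits += is_biased
    return biased_hits / hits

n, k, p = 10, 7, 0.8
print(posterior_biased(n, k, p))    # ~0.632
print(estimate_posterior(n, k, p))  # should agree to about two decimals
```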
b) In this case, we keep tossing until we get $k$ heads, which means the last toss must be a head. We are given that $n$ tosses were needed in total.
As in (a) above, let $F$ be the set of outcomes where I have chosen the fair coin, and $B$ be the set of outcomes where I have chosen the biased coin. Let $C_n$ be the event that I had to toss $n$ times to get $k$ heads. I need $P(B|C_n)$.
$$P(C_n \cap F) = \frac{1}{2} \times \frac{1}{2} \times {{n-1}\choose{k-1}}\frac{1}{2^{n-1}}$$ $$P(C_n \cap B) = \frac{1}{2} \times p \times {{n-1}\choose{k-1}}p^{k-1} (1-p)^{n-k}$$
Again using Bayes' theorem along the lines of what was done in (a), we get
$$P(B|C_n)=\frac{p^k (1-p)^{n-k}}{p^k (1-p)^{n-k}+\frac{1}{2^n}}$$
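Again as a sanity check (my own addition, not part of the problem), the same kind of simulation for the stopping-time experiment — toss until $k$ heads, condition on exactly $n$ tosses — matches the same closed form:

```python
import random

def posterior_biased(n, k, p):
    # The closed form above, identical to the answer in part a)
    biased = p**k * (1 - p)**(n - k)
    fair = 0.5**n
    return biased / (biased + fair)

def estimate_posterior_b(n, k, p, trials=200_000, seed=1):
    # Toss the chosen coin until k heads appear (stopping early if
    # that hasn't happened within n tosses), and condition on the
    # k-th head arriving exactly on toss n.
    rng = random.Random(seed)
    hits = biased_hits = 0
    for _ in range(trials):
        is_biased = rng.random() < 0.5
        q = p if is_biased else 0.5
        tosses = heads = 0
        while heads < k and tosses < n:
            tosses += 1
            heads += rng.random() < q
        if heads == k and tosses == n:
            hits += 1
            biased_hits += is_biased
    return biased_hits / hits

n, k, p = 10, 7, 0.8
print(posterior_biased(n, k, p))      # ~0.632
print(estimate_posterior_b(n, k, p))  # close to the exact value
```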
I am a bit skeptical about my answer, as the answers to (a) and (b) are turning out to be the same. I cannot find an intuitive explanation as to why that is.
Please provide feedback and let me know if I have solved this question correctly. In case there is a mistake, please point me to it.
Your calculations and arguments are correct. There is only a clerical error in a) in the line after you mention Bayes's theorem: You wrote
$$P(A_k|B)$$
in the numerator when you meant (and had calculated before)
$$P(A_k\cap B)$$.
That both results are the same is a bit surprising, but then the end states of a) and b) are very similar: you have tossed the coin $n$ times and seen $k$ heads. The only difference is that in b) you know the last toss is heads. But that doesn't change the fact that the number of possible arrangements of heads and tails among those $n$ tosses is the same for both the fair and the biased coin, and that this factor cancels out when you compute the quotient.
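To spell out the cancellation (my own sketch, using the quantities computed in b)): the prior $\frac{1}{2}$ and the negative-binomial coefficient $\binom{n-1}{k-1}$ appear in both the numerator and the denominator,

$$P(B\mid C_n)=\frac{P(C_n\cap B)}{P(C_n\cap B)+P(C_n\cap F)}=\frac{\frac{1}{2}\binom{n-1}{k-1}p^{k}(1-p)^{n-k}}{\frac{1}{2}\binom{n-1}{k-1}\left(p^{k}(1-p)^{n-k}+\frac{1}{2^{n}}\right)}=\frac{p^{k}(1-p)^{n-k}}{p^{k}(1-p)^{n-k}+\frac{1}{2^{n}}}$$

exactly as $\binom{n}{k}$ cancelled in a), which is why the two answers coincide.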