Probability of getting HHT vs. HTT but using Markov Chains?


So I conduct interviews for candidates sometimes, and one of my go-to questions is this classic:

If you keep flipping a fair coin, what is the probability that you will get HHT before HTT (H = heads, T = tails)?

Now it is easily solved with conditional probabilities by writing out an equation for each state. Solving for the probability of winning $P(W)$:

$$ P(W) = \frac{1}{2} P(W|H) + \frac{1}{2} P(W|T)$$

$$ P(W|H) = \frac{1}{2} P(W|HH) + \frac{1}{2} P(W|HT)$$

$$ P(W|T) = P(W) $$

and so on. The answer is well known, $P(W)=\frac{2}{3}$, and this question has been answered online many times.
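(As a sanity check, the $\frac{2}{3}$ answer is easy to confirm with a quick simulation; this is just a sketch, not part of the interview question:)

```python
import random

def first_pattern(p1="HHT", p2="HTT", rng=random.Random(0)):
    """Flip a fair coin until one of the two patterns appears; return the winner."""
    s = ""
    while True:
        s += rng.choice("HT")
        if s.endswith(p1):
            return p1
        if s.endswith(p2):
            return p2

trials = 100_000
wins = sum(first_pattern() == "HHT" for _ in range(trials))
print(wins / trials)  # close to 2/3
```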


However, today a candidate had an approach I hadn't seen before. They tried to solve it all in one go, by using a Markov chain and multiplying the transition probabilities along each path together. They then "normalized both outcomes". Visually, here's what that looks like:

[Diagram: a Markov chain with start node 0; node 1 (H), node 2 (HH, with a self-loop), node 3 (HHT, win), node 4 (HT), node 5 (HTT, loss), and a cycle between nodes 1 and 4.]

They then calculated the probability of winning "unnormalized", which I will denote with a hat:

$$ P(\hat{W}) = P(\texttt{node_0 to node_1})*P(\texttt{node_1 to node_2})*P(\texttt{node_2 to node_3})$$

Each transition is $\frac{1}{2}$, but $P(\texttt{node_2 to node_3})= 1-P(\texttt{node_2 to node_2})$, so it's $\frac{1}{1-\frac{1}{2}}$; there's a cycle that messes things up, I guess, which they handled by "summing up the contributions of that cycle, which is a geometric series".

So final probability is:

$$ P(\hat{W}) = \frac{1}{2} * \frac{1}{2} * \frac{1}{1-1/2} = \frac{1}{2}$$

By the same logic for $P(\hat{L})$:

$$ P(\hat{L}) = P(\texttt{node_0 to node_1})*P(\texttt{node_1 to node_4})*P(\texttt{node_4 to node_5})$$

Because "there's a cycle between nodes 1 and 4, $P(\texttt{node_1 to node_4}) = \frac{1}{1-\frac{1}{4}}$". I assume this follows from their earlier approach with $P(\texttt{node_2 to node_3})$. So:

$$P(\texttt{node_1 to node_4})= 1-P(\texttt{node_4 to node_4})= \frac{1}{1-\frac{1}{4}}$$

Putting this all together:

$$ P(\hat{L}) = \frac{1}{2} * \frac{1}{1-\frac{1}{4}} * \frac{1}{2} = \frac{1}{3}$$

Then to normalize, you do:

$$P(W) = \frac{P(\hat{W})}{P(\hat{W}) + P(\hat{L})}=\frac{\frac{1}{2}}{ \frac{1}{2} + \frac{1}{3}} = \frac{6}{10}$$


  • All "quoted phrases" are directly from the candidate; I don't quite understand them either

Um, at this point I was completely lost and had to move on. Any suggestions on what the candidate was going for? I tried to help, but they were so deep into their solution that I couldn't follow it. I played around with their solution for a while afterwards, and if we set $P(\hat{L})=\frac{1}{4}$ it works out. But I still don't understand the flaw in their logic there.
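(To double-check myself afterwards, I solved the conditional-probability equations from the start of the post exactly with Python's `fractions`; a quick sketch, using the state labels from the diagram:)

```python
from fractions import Fraction as F

# States: 1 = last flip H, 2 = HH, 4 = HT; p_s = P(HHT before HTT from state s).
# p2 = 1 (from HH, heads stays in HH and tails completes HHT), and:
#   p1 = 1/2 * p2 + 1/2 * p4
#   p4 = 1/2 * 0  + 1/2 * p1   (HT then T loses; HT then H returns to state 1)
# Substituting gives p1 = 1/2 + (1/4) * p1.
half = F(1, 2)
p1 = (half * 1) / (1 - half * half)  # solve p1 = 1/2 + (1/4) p1
p4 = half * p1
print(p1, p4)  # 2/3 1/3
```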

I would love any feedback on how to actually solve it this way, and any pointers as to what this type of solution is called. I had a hard time looking up this particular method, and I'd like to be a better interviewer and learn more while doing it.


On BEST ANSWER

States 3 and 5 are the absorbing states of the Markov chain. The question is what is the probability of ending up in each state.

There are some simplifications we can make.

  • With probability 1, we will reach state 1. (Equivalently, the probability of never seeing heads is zero.)
  • If we are in state 2, we will eventually reach state 3 with probability 1.

So we can consider the simplified problem of starting in state 1 and asking for the probability of reaching state 2 before state 5.

The only way to reach state 2 from state 1 is to go around the loop between states 1 and 4 some number of times and then move to state 2. Each loop has probability $\frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$, so the sum of the probabilities of these outcomes is $$\sum_{k=0}^\infty \left(\frac{1}{2} \cdot \frac{1}{2} \right)^k \frac{1}{2} = \frac{1}{2} \cdot \frac{1}{1-\frac{1}{4}} = \frac{2}{3}.$$
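The analogous series for the losing path (some loops, then state 1 → 4 → 5) gives $\frac{1}{3}$, so the two already sum to 1 and no normalization is needed. A numerical sketch of both truncated series:

```python
# Each 1 -> 4 -> 1 loop has probability 1/2 * 1/2 = 1/4; truncate the series at 60 terms.
win  = sum((1/4)**k * (1/2)       for k in range(60))  # leave the loop via 1 -> 2
lose = sum((1/4)**k * (1/2 * 1/2) for k in range(60))  # leave the loop via 1 -> 4 -> 5
print(win, lose)  # approximately 0.6667 and 0.3333
```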