Probability of knocking off all of a dragon's heads


You are fighting a dragon with three heads. Each time you swing at the dragon, you have a $20\%$ chance of knocking off two heads, a $60\%$ chance of knocking off one head and a $20\%$ chance of missing altogether. If you knock off one head, the head grows back immediately before the next iteration. If you miss, an additional head grows immediately before the next iteration. If you knock off two heads, the heads stay knocked off and you move to the next iteration. You win if you knock off all of the dragon's heads, and the dragon wins if at any time it has five heads. What are the odds you win the game?

Does anyone have a definitive answer for this? I was able to get an answer by finding the probabilities for each possible number of heads and the chances of winning or losing within $3$ iterations, but I wanted to check whether it is right. Then, if you get back to $3$ heads, you multiply by $p$ through recursion. Can you ignore the $60\%$ outcome, since nothing in the game changes when it occurs?

I thought: $p$ = (chance of two hits in a row) + (chance of two hits and one miss) + (probability of getting back to $3$ heads with one miss and one hit) $\times\ p$:
$$p = \left(\tfrac15\cdot\tfrac15\right) + 2\left(\tfrac15\cdot\tfrac15\cdot\tfrac15\right) + 2\left(\tfrac15\cdot\tfrac15\cdot\tfrac15\right)p$$

This seems a bit low, likely because I'm leaving out that $60\%$, but I'm not sure where to include it. Any help would be greatly appreciated.



Your intuition is correct: we can entirely ignore the outcome in which one head is lopped off and regrows immediately, at least if this applies even when only $1$ head is left. However, we will not make use of that shortcut here, even though it simplifies things.

If the game is not over yet, then the dragon has $1$, $2$, $3$, or $4$ heads. Call these States $1,2,3,4$. Let $a,b,c,d$ respectively be the probabilities that you (ultimately) win if you are in States $1,2,3,4$ respectively.

We need to come to some decision about what happens if we are in State $1$. We interpret the wording to mean that with probability $0.2$ we win outright, with probability $0.6$ we stay in State $1$ (the head regrew; the fact that temporarily there was no head doesn't matter), and with probability $0.2$ we miss and move to State $2$. There could be other interpretations, which can be dealt with by a minor modification of the equations below.

The analysis of what can happen if we are in State $1$ yields the equation $$a=0.2+(0.6)a+(0.2)b.$$

Now suppose we are in State $2$. If we lop off $2$ heads we win, if we lop off $1$ it grows back so we stay in State $2$, and if we miss we enter State $3$. Thus $$b=0.2+(0.6)b+(0.2)c.$$

Similar analyses give $$c=(0.2)a+(0.6)c+(0.2)d$$ and $$d=(0.2)b+(0.6)d.$$ Solve this system of linear equations. Note that they simplify to $$2a=1+b; \quad 2b=1+c;\quad 2c=a+d;\quad 2d=b.$$
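The simplified system can be solved exactly by substitution. Here is a minimal Python sketch (my own check, using exact fractions; the variable names $a,b,c,d$ match the states above):

```python
from fractions import Fraction

# Solve 2a = 1 + b, 2b = 1 + c, 2c = a + d, 2d = b by substitution.
# From the first, second and fourth equations:
#   b = 2a - 1,  c = 2b - 1 = 4a - 3,  d = b/2 = a - 1/2.
# Substituting into 2c = a + d gives 8a - 6 = 2a - 1/2, so 6a = 11/2.
a = Fraction(11, 2) / 6          # a = 11/12
b = 2 * a - 1                    # b = 5/6
c = 2 * b - 1                    # c = 2/3, the win probability from 3 heads
d = b / 2                        # d = 5/12
assert 2 * c == a + d            # the remaining equation is satisfied
print(a, b, c, d)
```

Starting from $3$ heads, the answer is $c = 2/3$.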


I figured out an easier way. My original logic was correct, but since you can ignore the $60\%$ outcome, each iteration reduces to two relevant outcomes with equal conditional probability, so you can set both probabilities to $1/2$.

Then using the same math I used in my original post, you get:

$$p = \left(\tfrac12\cdot\tfrac12\right) + 2\left(\tfrac12\cdot\tfrac12\cdot\tfrac12\right) + 2\left(\tfrac12\cdot\tfrac12\cdot\tfrac12\right)p = \tfrac12 + \tfrac14 p,$$ which gives $p = \tfrac23$.
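As a sanity check on that closed form, here is a small Monte Carlo sketch of the full game in Python (my own addition; it keeps the $60\%$ one-head outcome, which never changes the state, and the function name and trial count are arbitrary choices):

```python
import random

def play(rng, start=3):
    """Play one game; return True if the hero wins, False if the dragon does."""
    heads = start
    while True:
        r = rng.random()
        if r < 0.2:            # knock off two heads
            heads -= 2
            if heads <= 0:
                return True    # all heads gone: win
        elif r < 0.8:          # knock off one head: it regrows, no net change
            pass
        else:                  # miss: an extra head grows
            heads += 1
            if heads >= 5:
                return False   # five heads: dragon wins

rng = random.Random(0)
trials = 200_000
wins = sum(play(rng) for _ in range(trials))
print(wins / trials)           # close to 2/3
```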


This solution is a bit more involved and uses the method that Andre was suggesting. States $0,1,2,3,4,5$ represent the number of heads the dragon has.


This is a typical Markov chain with six states, of which $0$ and $5$ are absorbing states, meaning that once the chain reaches such a state it cannot leave. The remaining states are non-absorbing (transient): the chain can move in and out of them indefinitely. The first matrix is the standard form of the transition matrix: it gives the probability of moving from each state to each other state. Each absorbing state's row has a $1$ on itself and $0$ elsewhere, while each transient state's row lists the probabilities of moving to the other states. The standard form is partitioned into blocks: an identity block, a zero block, $F$ and $Q$. You then find the limiting matrix by computing $(I-Q)^{-1}$ and multiplying it by $F$. The limiting matrix gives the probability of eventually moving from each non-absorbing state to each absorbing state. In the example below, the probability of moving from the current state, $3$, to state $0$ (that is, of winning) is $\frac{2}{3}$.
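That computation can be sketched in a few lines of NumPy (my own reconstruction of the transition matrix from the rules, with the first answer's interpretation that a double hit from states $1$ and $2$ both count as a win):

```python
import numpy as np

# States 0..5 = number of heads; 0 and 5 are absorbing.
P = np.zeros((6, 6))
P[0, 0] = P[5, 5] = 1.0                   # absorbing states
for k in (1, 2, 3, 4):                    # transient states
    P[k, max(k - 2, 0)] += 0.2            # two heads knocked off
    P[k, k] += 0.6                        # one head off, but it regrows
    P[k, k + 1] += 0.2                    # miss: an extra head grows

transient, absorbing = [1, 2, 3, 4], [0, 5]
Q = P[np.ix_(transient, transient)]       # transient-to-transient block
F = P[np.ix_(transient, absorbing)]       # transient-to-absorbing block
B = np.linalg.solve(np.eye(4) - Q, F)     # (I - Q)^{-1} F, absorption probs
print(B[transient.index(3), 0])           # P(win from 3 heads), approximately 2/3
```

Each row of `B` gives the probability of absorption in state $0$ (win) and state $5$ (lose) from the corresponding transient state; the row for state $3$ matches the $2/3$ found above.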

In short, this is a classical absorbing Markov chain problem, and the probability of winning is the probability of absorption in state $0$.

[Image: the transition matrix in standard form and the resulting limiting matrix]