Markov Chains - Strong Markov Property


I'm stuck on an exercise. I tried, but my results don't seem to match the proposed answers.

Exercise: Two players play the following game. The one who begins draws two cards from a deck of 40 cards (10 cards per suit):

  • if they are both clubs the player wins;
  • if they are of the same suit but not clubs, the player shuffles the cards and starts again;
  • otherwise the player shuffles the cards and lets the other player play.

1) Model the game with a Markov Chain.

2) What is the probability that the player who started wins?

My attempt: If $X_n$ is the player whose turn it is at step $n$, we have a three-state Markov chain with state space $\{A,\, B,\, \text{exit}\}$, where the exit state is the one reached when the game ends. The transition matrix is

$$P=\begin{pmatrix}9/52 & 10/13 & 3/52 \\ 10/13 & 9/52 & 3/52 \\ 0 & 0 & 1 \end{pmatrix}$$

since the probability of staying with the same player is $3\cdot\frac{10 \cdot 9}{40 \cdot 39}=\frac{9}{52}$, and so on.
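These one-turn probabilities can be double-checked with a few lines of Python (exact arithmetic via `fractions`):

```python
from fractions import Fraction
from math import comb

# One-turn probabilities when drawing 2 cards from a 40-card deck (10 per suit).
p_win = Fraction(comb(10, 2), comb(40, 2))  # both cards are clubs: 3/52
p_stay = 3 * p_win                          # same non-club suit (3 suits): 9/52
p_switch = 1 - p_win - p_stay               # otherwise, pass the deck: 10/13

print(p_win, p_stay, p_switch)  # 3/52 9/52 10/13
```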

For the second question I thought I could use the strong Markov property (the one stating that a Markov chain starts afresh at a stopping time), applied to the last time the chain visits $A$ (or $B$) before the game ends. From that point, the probability would just be the one-step jump from $A$ (or $B$) to exit, times two (to cover both the $A$ and the $B$ case).

What's wrong with this last point?


There are 2 best solutions below

Solution 1:

First, I wouldn't model the Markov chain like that; I would consider four states: 1 = "player A plays", 2 = "player B plays", 3 = "player A has won", and 4 = "player B has won".

The transition matrix would be

$$M = \begin{pmatrix} 9/52 & 10/13 & 3/52 & 0 \\ 10/13 & 9/52 & 0 & 3/52 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Then the probability of being in each state after $n$ steps, given that we started in state 1, is given by

$$\pi^{(n)} = (1,0,0,0)M^n$$

And the probability we're looking for is

$$p = \lim_{n\to +\infty} \pi_3^{(n)}$$

To calculate $M^n$, you can diagonalize $M$, which should give you the answer (I didn't carry out the computation).
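Rather than diagonalizing by hand, the limit can be sanity-checked numerically (a sketch using numpy; $n=1000$ is far more than enough, since the eigenvalues of the transient $2\times 2$ block are $49/52$ and $-31/52$, both of modulus less than $1$):

```python
import numpy as np

# Transition matrix M from above; states are
# 1 = A plays, 2 = B plays, 3 = A has won, 4 = B has won.
M = np.array([
    [9/52, 40/52, 3/52,    0],
    [40/52, 9/52,    0, 3/52],
    [0,        0,    1,    0],
    [0,        0,    0,    1],
])

# Distribution after 1000 steps, starting from state 1.
pi = np.array([1.0, 0.0, 0.0, 0.0]) @ np.linalg.matrix_power(M, 1000)
print(pi[2])  # probability that A has won: approx 0.5181, i.e. 43/83
```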

Solution 2:

To solve question 2 you don't need the full Markov chain machinery, only first-step analysis. On the first draw there are 3 possibilities:

  1. player $A$ wins,
  2. the game starts over,
  3. the deck goes to player $B$.

Letting $p_A$ be the probability that player $A$ is the eventual winner, and taking these 3 cases into account gives equation (1) below.

With $p={10\choose 2}/{40\choose 2}=\frac{3}{52}$ we get $$p_A=p\cdot 1 +3p\cdot p_A+(1-4p)\cdot(1-p_A).\tag 1$$ Solving (1) gives $p_A={43\over 83}\approx 0.51808$.
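As a check, rearranging (1) gives $p_A\,(2-7p)=1-3p$, which can be evaluated exactly (a small sketch using Python's `fractions`):

```python
from fractions import Fraction
from math import comb

# p = probability both cards are clubs = C(10,2)/C(40,2) = 3/52.
p = Fraction(comb(10, 2), comb(40, 2))

# Rearranging p_A = p + 3p*p_A + (1-4p)*(1-p_A) gives p_A*(2-7p) = 1-3p.
p_A = (1 - 3 * p) / (2 - 7 * p)
print(p_A)  # 43/83
```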