Spot the error in this derivation! Multiple Markov chains 'racing' on the same state space.


I apologize in advance for the notation; it is not the prettiest, but notation rarely is.

Let $\{X(t)\}$ and $\{Y(t)\}$ be two independent discrete-time Markov chains on the same state space, where one of the states is of particular interest. (Each chain has its own transition matrix, but that is not really relevant.)

I want to calculate the probability that $\{X(t)\}$ reaches the 'goal' state (denote it $s$) before $\{Y(t)\}$ does; a draw counts as a loss for $X$. Here is what I have so far:

Let $N$ be a random variable for the time it takes for either chain to reach the goal, i.e., for the race to end. $$\mathbb{P}(X \text{ wins}) = \mathbb{P}(X(N)=s,\, Y(N) \neq s) = \sum_{n=0}^\infty \mathbb{P}(X(N)=s,\, Y(N) \neq s \mid N=n)\,\mathbb{P}(N=n) = $$ $$ \sum_{n=0}^\infty \mathbb{P}(X(n)=s,\, Y(n) \neq s)\,\mathbb{P}(N=n) = \sum_{n=0}^\infty \mathbb{P}(X(n)=s)\,\mathbb{P}(Y(n) \neq s)\,\mathbb{P}(N=n) $$

We can easily find $\mathbb{P}(X(n)=s)$ and $\mathbb{P}(Y(n)=s)$ using the $n$-step transition matrix; for instance, $\mathbb{P}(X(n)=s) = (\mu_X^{(0)} \cdot M_X^n)_s$, that is, the $s$-th component of the product, where $\mu_X^{(0)}$ is the row vector giving the initial probability mass function of $\{X(t)\}$ and $M_X$ is its transition matrix.
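For concreteness, here is a small sketch of that computation in Python with NumPy. The 3-state chain, starting distribution, and goal state below are made-up illustrative numbers, not anything from the question:

```python
import numpy as np

# Hypothetical 3-state chain; state s = 2 is the goal (absorbing in this example).
M_X = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.6, 0.3],
                [0.0, 0.0, 1.0]])
mu0 = np.array([1.0, 0.0, 0.0])   # chain starts in state 0

s, n = 2, 4
# P(X(n) = s) is the s-th component of mu0 @ M_X^n
p = (mu0 @ np.linalg.matrix_power(M_X, n))[s]
```

For large $n$ it is cheaper to propagate the row vector step by step (`mu = mu @ M_X` in a loop) than to form the full matrix power.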

We can also find $\mathbb{P}(N=n)$ by viewing $N$ as the minimum of two random variables $N_x$ and $N_y$, representing the times it takes $X$ and $Y$ to reach the goal, respectively. Then, $$\mathbb{P}(N=n) = \mathbb{P}(\min(N_x, N_y)=n) = \mathbb{P}(N_x =n,\, N_y \geq n) + \mathbb{P}(N_x > n,\, N_y =n) =$$ $$ \mathbb{P}(N_x =n)\Big(1-\sum_{i=0}^{n-1}\mathbb{P}(N_y =i)\Big) + \Big(1-\sum_{i=0}^{n}\mathbb{P}(N_x =i)\Big)\mathbb{P}(N_y =n)$$
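This minimum-distribution step translates directly into code. A sketch (Python/NumPy; `f` and `g` are assumed to be arrays with `f[n]` $= \mathbb{P}(N_x=n)$ and `g[n]` $= \mathbb{P}(N_y=n)$, truncated at some `n_max`):

```python
import numpy as np

def race_end_pmf(f, g):
    """P(N = n) for N = min(N_x, N_y), where f and g are the pmfs of the
    independent hitting times N_x and N_y over n = 0..n_max."""
    F, G = np.cumsum(f), np.cumsum(g)                  # P(N_x <= n), P(N_y <= n)
    p = np.empty_like(f)
    for n in range(len(f)):
        tail_g = 1.0 - (G[n - 1] if n > 0 else 0.0)    # P(N_y >= n)
        tail_f = 1.0 - F[n]                            # P(N_x > n)
        p[n] = f[n] * tail_g + tail_f * g[n]
    return p
```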

The event $N_x=n$ means that $X(t)$ ends the race in exactly $n$ steps, which implies that state $s$ had not been visited before then (otherwise the race would have finished in fewer than $n$ steps). So we can think of $N_x$ as a first passage time, with distribution denoted $f_{o,s}(n)$, where $o$ denotes the given starting state (and $g_{o,s}(n)$ for $Y(t)$). I used some code to find all of these values, so substituting them into the formula is not a problem. This means our formula looks like:

$$ \mathbb{P}(X \text{ wins}) = \sum_{n=0}^\infty (\mu_X^{(0)} \cdot M_X^n)_s\left(1-(\mu_Y^{(0)} \cdot M_Y^n)_s\right)\left(f_{o,s}(n)\Big(1-\sum_{i=0}^{n-1}g_{o,s}(i)\Big) + \Big(1-\sum_{i=0}^{n}f_{o,s}(i)\Big)g_{o,s}(n)\right)$$
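The first-passage probabilities $f_{o,s}(n)$ appearing here can be computed numerically by making $s$ absorbing and differencing the resulting hitting-time CDF. A sketch of what such code might look like (not the OP's actual code):

```python
import numpy as np

def first_passage(M, o, s, n_max):
    """First-passage probabilities f_{o,s}(n) for n = 0..n_max: make s
    absorbing, propagate the distribution, and difference P(hit by time n)."""
    M_abs = M.copy()
    M_abs[s, :] = 0.0
    M_abs[s, s] = 1.0                 # s absorbs all further mass
    mu = np.zeros(M.shape[0])
    mu[o] = 1.0                       # start in state o
    f = np.zeros(n_max + 1)
    prev = mu[s]                      # P(hit by time 0); 1 iff o == s
    f[0] = prev
    for n in range(1, n_max + 1):
        mu = mu @ M_abs
        f[n] = mu[s] - prev           # P(first hit exactly at time n)
        prev = mu[s]
    return f
```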

Now, I have computed a few small examples by hand, and the actual probability that $X$ wins does not match the result of this formula, meaning there is a mistake somewhere, but I cannot for the life of me find where it is. Are you up for the challenge?
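Such hand checks can also be mechanized. A brute-force Monte Carlo sketch (made-up 3-state chains with an absorbing goal state, not the OP's examples) that estimates $\mathbb{P}(X \text{ wins})$ directly, for comparison against the formula:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state chains with absorbing goal state s = 2; both start in
# state 0. Illustrative numbers only.
M_X = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.6, 0.3],
                [0.0, 0.0, 1.0]])
M_Y = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.5, 0.3],
                [0.0, 0.0, 1.0]])
s, trials, t_max = 2, 20_000, 200

def hitting_time(M):
    """Simulate one trajectory from state 0; return the first time s is hit
    (capped at t_max, which these chains essentially never reach)."""
    state, t = 0, 0
    while state != s and t < t_max:
        state = rng.choice(3, p=M[state])
        t += 1
    return t

wins = sum(hitting_time(M_X) < hitting_time(M_Y) for _ in range(trials))
p_x_wins = wins / trials   # draws count as losses for X, per the setup
```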

Answer:

The mistake is that $\mathbb{P}(X(N)=s,Y(N)\ne s | N = n) \ne \mathbb{P}(X(n)=s,Y(n) \ne s)$ because you are removing the information about $N$ in the second probability. It would be correct to say $\mathbb{P}(X(N)=s,Y(N)\ne s | N = n) = \mathbb{P}(X(n)=s,Y(n) \ne s|N=n)$, but you can't drop the conditioning.

Just thinking intuitively, it's possible that $\mathbb{P}(X(n)=s)$ and $\mathbb{P}(Y(n)=s)$ are very small for any fixed $n$, say $\mathbb{P}(X(n)=s) + \mathbb{P}(Y(n)=s) \le 2 \varepsilon$, but if we know that $N=n$ then we know either $X(n)=s$ or $Y(n)=s$, so $\mathbb{P}(X(n)=s|N=n) + \mathbb{P}(Y(n)=s|N=n) = 1.$ Basically, knowing that $N=n$ makes it much more likely that $X(n)=s$, simply because we know that either $X(n)$ or $Y(n)$ has to be $s$.
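This effect is easy to see numerically. A small simulation sketch (made-up 3-state chains with absorbing goal state; illustrative numbers, not from the question) comparing the unconditional $\mathbb{P}(X(n)=s)$ with $\mathbb{P}(X(n)=s \mid N=n)$ for one fixed $n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chains with absorbing goal state s = 2; both start in
# state 0. Illustrative numbers only.
M_X = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.6, 0.3],
                [0.0, 0.0, 1.0]])
M_Y = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.5, 0.3],
                [0.0, 0.0, 1.0]])
s, n_fix, trials, t_max = 2, 3, 50_000, 200

def hitting_time(M):
    state, t = 0, 0
    while state != s and t < t_max:
        state = rng.choice(3, p=M[state])
        t += 1
    return t

plain_hits = cond_hits = cond_n = 0
for _ in range(trials):
    nx, ny = hitting_time(M_X), hitting_time(M_Y)
    if nx <= n_fix:              # s absorbing, so X(n_fix) = s  <=>  N_x <= n_fix
        plain_hits += 1
    if min(nx, ny) == n_fix:     # condition on the race ending exactly at n_fix
        cond_n += 1
        if nx == n_fix:          # given N = n_fix, X(n_fix) = s  <=>  N_x = n_fix
            cond_hits += 1

p_plain = plain_hits / trials    # estimates P(X(n) = s)
p_cond = cond_hits / cond_n      # estimates P(X(n) = s | N = n)
```

With these particular numbers the conditional estimate comes out noticeably larger than the unconditional one, which is exactly the gap the derivation silently discards.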