Adding distances/weights to an absorbing Markov chain


In the presence of an absorbing state, I want to calculate the mean/expected 'distance' from any state to that absorbing state. By distance I mean that I want to assign different lengths to the transitions from one particular state to another.

For example, below I wrote a simple coin-toss chain, two levels deep. A player starts in state S; with a coin toss he goes to A or B. If he reaches A, he then reaches End with 100% probability. But if he reaches B, he either reaches End or returns to Start, with equal probability.

\begin{align*} \begin{pmatrix} & S & A & B & E \\ S & 0 & \frac{1}{2} & \frac{1}{2} & 0\\ A & 0 & 0 & 0 & 1 \\ B & \frac{1}{2} & 0 & 0 & \frac{1}{2} \\ E & 0 & 0 & 0 & 1 \end{pmatrix} \end{align*}

I then computed the fundamental matrix for the transient states:

\begin{align*} (I-Q)^{-1} = \left(\begin{array}{rr} \frac{4}{3} & \frac{2}{3} & \frac{2}{3} \\ 0 & 1 & 0 \\ \frac{2}{3} & \frac{1}{3} & \frac{4}{3} \end{array}\right) \end{align*}

Summing the first row gives 8/3, the expected number of steps from Start to End.
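As a sanity check, this computation can be reproduced numerically; a minimal sketch in Python with NumPy, using the state order S, A, B from the matrix above:

```python
import numpy as np

# Transient-to-transient block Q of the chain above (state order: S, A, B)
Q = np.array([
    [0.0, 0.5, 0.5],   # S: coin toss to A or B
    [0.0, 0.0, 0.0],   # A: always absorbed into E on the next step
    [0.5, 0.0, 0.0],   # B: back to S with prob 1/2 (other half absorbed)
])

# Fundamental matrix N = (I - Q)^(-1): N[i, j] is the expected number of
# visits to transient state j when starting from transient state i
N = np.linalg.inv(np.eye(3) - Q)

# Expected number of steps from Start to absorption = sum of the Start row
print(N[0].sum())   # 8/3 ≈ 2.6667
```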

Where I am stuck: the probabilities stay the same, but the distances from one state to another differ, and I don't know how to manipulate these matrices to take the distances into account.

For example, the distance A → E may be 1 km, but B → E may be 5 km; S → A may be 20 km, and the rest 1 km. How can I manipulate these matrices to get this information? Thanks in advance.

Best answer:

One way to approach this is to create another Markov chain where the states are the transitions that have positive probability in the original Markov chain, plus one state for the initial state. We only need to include transitions from transient states since that's all we're interested in. So we exclude the transition $E$ to $E$. In your example we would have states: $S,SA,SB,AE,BS,BE$. Our transition matrix, $F$, for this new Markov chain is:

\begin{align*} F = \begin{pmatrix} & S & SA & SB & AE & BS & BE \\ S & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ SA & 0 & 0 & 0 & 1 & 0 & 0 \\ SB & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ AE & 0 & 0 & 0 & 0 & 0 & 0 \\ BS & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ BE & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \end{align*}

Entry $i,j$ of the matrix $(I-F)^{-1}$ represents the expected total number of visits to state $j$ when beginning in state $i$ (states $i,j$ being states of the new Markov chain), by the same reasoning as for the matrix $(I-Q)^{-1}$ in the original Markov chain. Here we have,

\begin{align*} (I-F)^{-1} = \begin{pmatrix} & S & SA & SB & AE & BS & BE \\ S & 1 & 2/3 & 2/3 & 2/3 & 1/3 & 1/3 \\ SA & 0 & 1 & 0 & 1 & 0 & 0 \\ SB & 0 & 1/3 & 4/3 & 1/3 & 2/3 & 2/3 \\ AE & 0 & 0 & 0 & 1 & 0 & 0 \\ BS & 0 & 2/3 & 2/3 & 2/3 & 4/3 & 1/3 \\ BE & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix} \end{align*}

For example, the $(S, BE)$ entry of $(I-F)^{-1}$, namely $1/3$, is the expected number of times we move in one step from state $B$ to state $E$ (in the original Markov chain) given that we start from state $S$.
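The same numerical check works for this transition chain; a sketch assuming the state order $S, SA, SB, AE, BS, BE$:

```python
import numpy as np

# Chain on the transitions of the original chain (order: S, SA, SB, AE, BS, BE)
F = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0, 0.0],  # S  -> SA or SB
    [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],  # SA (now at A) -> AE with prob 1
    [0.0, 0.0, 0.0, 0.0, 0.5, 0.5],  # SB (now at B) -> BS or BE
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # AE: chain absorbed, no outgoing moves
    [0.0, 0.5, 0.5, 0.0, 0.0, 0.0],  # BS (back at S) -> SA or SB
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # BE: chain absorbed
])

NF = np.linalg.inv(np.eye(6) - F)

# Expected number of B -> E moves starting from S: the (S, BE) entry
print(NF[0, 5])   # 1/3
```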

We also need a vector of the distances associated with each state of the new chain, in the order $S\; (=0), SA, SB, AE, BS, BE$:

\begin{align*} D = \begin{pmatrix} 0 \\ 20 \\ 1 \\ 1 \\ 1 \\ 5 \\ \end{pmatrix} \end{align*}

The expected total distance covered is then given by,

\begin{align*} (I-F)^{-1} \times D = \begin{pmatrix} 50/3 \\ 21 \\ 37/3 \\ 1 \\ 53/3 \\ 5 \\ \end{pmatrix} \end{align*}

Since we start at state $S$, the value required is $\dfrac{50}{3}$.