How to model a markov chain that randomly switches between two transition matrices?


Let's say I have a discrete-time, time-homogeneous Markov chain $X = \{X_{1}, \dots , X_{n}\}$ with state space $S= \{1,2,3\}$ and transition matrix:

$$\begin{bmatrix} .4 & .3 & .3\\ .3 & .5 & .2\\ .8 & .1 & .1 \end{bmatrix}$$

However, I want to model the process in the following way: let's say we are currently visiting some state at $X_{10}$. Before we move to the next state at $X_{11}$, there is a small probability $p$ that the process first breaks out of this transition matrix and instead follows a new transition matrix:

$$\begin{bmatrix} 0 & .5 & .5\\ .5 & 0 & .5\\ .5 & .5 & 0 \end{bmatrix}$$

Then, once the current state is $X_{11}$, we return to the original transition matrix, but before we move to $X_{12}$ there is again the small probability $p$ of first breaking out to the second matrix, and so on endlessly.

Is there a way to model this behavior mathematically? While I can program this behavior directly, the $n$-step transition matrices of the first matrix alone are meaningless here: they wouldn't account for the process's ability to break out and switch matrices at every step.

My guess is that I have to somehow weight the first matrix using the second, which I have no idea how to do, and the break-out probability $p$ complicates things. Also, if it is of any concern, the actual matrix I am trying to model is $256 \times 256$, so ultimately the solution needs to be computable.
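For concreteness, here is a minimal sketch (my own illustration, not part of the question) of the "program this behavior directly" approach: before each step, flip a $p$-coin to decide which matrix governs that single transition.

```python
import numpy as np

# "Default" and "break-out" transition matrices from the question.
A = np.array([[.4, .3, .3],
              [.3, .5, .2],
              [.8, .1, .1]])
B = np.array([[0., .5, .5],
              [.5, 0., .5],
              [.5, .5, 0.]])

def simulate(n_steps, p, x0=0, rng=None):
    """Simulate the chain: before each step, with probability p
    that single transition uses B; otherwise it uses A."""
    rng = np.random.default_rng(rng)
    x = x0
    path = [x]
    for _ in range(n_steps):
        M = B if rng.random() < p else A  # decide which matrix governs this step
        x = rng.choice(3, p=M[x])         # sample next state from row x
        path.append(x)
    return path
```

The function name and signature are hypothetical; the point is only that the switch decision is made fresh and independently at every step.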


Accepted answer:

Yes, it can be done quite simply. All you have to do is carefully work out the true one-step transition probabilities.

Consider, for instance, the probability of going from state 1 to state 2 in a single step. If the "original" matrix is used, this probability is $0.3$; if the switch happens, it is $0.5$. By the law of total probability, the overall probability of transitioning from state 1 to state 2 is \begin{align*} &P(1 \rightarrow 2 \mid \text{no switch}) \cdot P(\text{no switch}) + P(1 \rightarrow 2 \mid \text{switch}) \cdot P(\text{switch}) \\ &\qquad = 0.3(1-p) + 0.5 p. \end{align*} The same reasoning applies to every pair of states, so it shouldn't be hard to convince yourself that the transition matrix is $(1-p)A + pB$, where $A$ is the "default" transition matrix and $B$ is the second one. Because the switch decision is made independently at each step, the result is a good old-fashioned time-homogeneous Markov chain with this new transition matrix, and its $n$-step transition probabilities are simply the matrix powers $\big((1-p)A + pB\big)^n$.