How to simulate visits to a transient state of a Markov chain.


Consider a discrete-parameter Markov chain $\{X_n, n ≥ 0\}$ with state space $E$, transition probability matrix $P$ and initial-state probabilities $p(0)$ given by $E = \{0, 1, 2, 3\}$,

P = $\begin{bmatrix}0 & 1 & 0 & 0\\1 & 0 & 0 & 0 \\0 & 0 & 1/2 & 1/2 \\1/4 & 1/4 & 1/4 & 1/4\end{bmatrix}$

and $p(0) =(0\;\; 0\;\; 0\;\; 1)$. How can one simulate an observation of $X_{10}$, the state of the chain at time $n = 10$?

On BEST ANSWER

For simplicity in simulation, I have changed the state space to $E = \{1,2,3,4\}.$ With this notation, notice that there will be a last visit to 'transient' state 4. On each visit to state 4 there is probability $1/4 + 1/4 = 1/2$ of moving to state 1 or 2, and no possibility of returning to state 4 from there. So by simulating $X_{10}$ repeatedly, you can estimate how likely the chain is still to be visiting state 4 after the tenth transition, and how likely it is that the chain has gotten 'absorbed' into the 'persistent' states 1 and 2 by then.
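(As a quick aside of my own, not part of the original answer:) both claims can be checked directly from the rows of $P$.

```r
# Sanity check of the two claims above, read straight off the rows of P.
P = (1/4)*matrix(c(0,4,0,0,
                   4,0,0,0,
                   0,0,2,2,
                   1,1,1,1), nrow=4, byrow=TRUE)
sum(P[4, 1:2])    # chance of jumping from state 4 into {1, 2}: 0.5
sum(P[1:2, 3:4])  # chance of ever moving from {1, 2} back to {3, 4}: 0
```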

In order to simulate the first ten transitions of the chain repeatedly as required, you need to know how to do the simulation once. Here is such a simulation using R statistical software. We begin by entering the transition matrix $P$ and then using it for simulation. (The transition matrix $P$ can be written on a single line, but it is easier to visualize as entered below.)

 P = (1/4)*matrix(c(0,4,0,0,
                    4,0,0,0,
                    0,0,2,2,
                    1,1,1,1), nrow=4, byrow=T)

 m = 11  # number of steps simulated
 x = numeric(m); x[1] = 4  # zero 11-vector; chain starts in state 4.
 for (i in 2:m) {
   x[i] = sample(1:4, 1, prob=P[x[i-1],])   # applicable row of P used
 }
 x
 ## 4 2 1 2 1 2 1 2 1 2 1

Two additional runs give

## 4 3 4 3 3 4 1 2 1 2 1
## 4 4 4 2 1 2 1 2 1 2 1

Now, for $B = 100,000$ simulations of the state visited after 10 transitions, we wrap an outer loop around the program above.

 P = (1/4)*matrix(c(0,4,0,0,
                    4,0,0,0,
                    0,0,2,2,
                    1,1,1,1), nrow=4, byrow=T)
 B = 100000;  PATH = matrix(0, nrow=B, ncol=11)
 for (j in 1:B)  {
   m = 11  # number of steps simulated
   x = numeric(m); x[1] = 4  # (initially zero) 11-vector; chain starts in state 4.
   for (i in 2:m) {
     x[i] = sample(1:4, 1, prob=P[x[i-1],])   # applicable row of P used for sim
   }
   PATH[j,] = x
 }
 table(PATH[,11])/B

 ##       1       2       3       4 
 ## 0.48006 0.48189 0.01855 0.01950 
 P10[4,]
 ## 0.48122883 0.48122883 0.01877117 0.01877117

The last row of $P^{10}$ shown above is found by matrix multiplication (code not shown). As suggested in the Comments by @Ian and @user230329, there is very good agreement with the simulation results.
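The matrix-power code was omitted above; a sketch of how it might look in R follows (the object name P10 matches the answer's usage, and the helper matpow is my own, since base R has no matrix-power operator):

```r
# Compute P^10 by repeated matrix multiplication.
P = (1/4)*matrix(c(0,4,0,0,
                   4,0,0,0,
                   0,0,2,2,
                   1,1,1,1), nrow=4, byrow=TRUE)
matpow = function(M, k) {
  R = diag(nrow(M))               # start from the identity matrix
  for (i in seq_len(k)) R = R %*% M
  R
}
P10 = matpow(P, 10)
P10[4,]
## 0.48122883 0.48122883 0.01877117 0.01877117
```

The same helper gives the fourth power, P4 = matpow(P, 4), used for the comparison after four transitions.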

An advantage of capturing all of the steps visited in the matrix PATH is that a similar comparison can be made for results after four transitions:

 table(PATH[,5])/B

 ##       1       2       3       4 
 ## 0.39295 0.39473 0.10562 0.10670 
 P4[4,]
 ## 0.3945313 0.3945313 0.1054688 0.1054688

Here we see that after only a few steps the chain has more likely than not moved from the transient states to the persistent ones.

My guess is that the reason you were asked to do some simulation is to develop intuition about how the chain moves. Perhaps soon in your continuing study of Markov chains you will see analytic formulas for the mean time spent in various transient states before 'absorption' into the persistent states.
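As a preview of those formulas (my own addition, not part of the original answer), the standard tool is the fundamental matrix $N = (I - Q)^{-1}$, where $Q$ is the submatrix of $P$ for the transient states $\{3, 4\}$; entry $N_{ij}$ is the expected number of visits to transient state $j$ before absorption, starting from transient state $i$ (counting the starting visit).

```r
# Fundamental matrix N = (I - Q)^{-1} for the transient states {3, 4}.
P = (1/4)*matrix(c(0,4,0,0,
                   4,0,0,0,
                   0,0,2,2,
                   1,1,1,1), nrow=4, byrow=TRUE)
Q = P[3:4, 3:4]          # transitions among the transient states only
N = solve(diag(2) - Q)   # (I - Q)^{-1}
N
##      [,1] [,2]
## [1,]    3    2
## [2,]    1    2
rowSums(N)               # expected steps among {3, 4} before absorption
```

Starting from state 4 (the chain's initial state), the second row of $N$ says to expect 1 visit to state 3 and 2 visits to state 4, so on average the chain spends $1 + 2 = 3$ steps among the transient states before settling into $\{1, 2\}$ forever.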