Consider a discrete-parameter Markov chain $\{X_n, n ≥ 0\}$ with state space $E$, transition probability matrix $P$ and initial-state probabilities $p(0)$ given by $E = \{0, 1, 2, 3\}$,
P = $\begin{bmatrix}0 & 1 & 0 & 0\\1 & 0 & 0 & 0 \\0 & 0 & 1/2 & 1/2 \\1/4 & 1/4 & 1/4 & 1/4\end{bmatrix}$
and $p(0) =(0\;\; 0\;\; 0\;\; 1)$. How can one simulate an observation $X_{10}$ of the chain at time $n = 10$?
For simplicity in simulation, I have relabeled the state space as $E = \{1, 2, 3, 4\}$. With this notation, notice that there will be a last visit to the transient state 4: on each visit to state 4 there is a 50-50 chance of moving to the persistent pair $\{1, 2\}$ (probability $1/4$ each), and once the chain is in states 1 and 2 it cycles between them forever, with no possibility of return to state 4. So by simulating $X_{10}$ repeatedly, you are estimating how likely the chain is still in a transient state (3 or 4) after ten transitions, and how likely it has already been 'absorbed' into the persistent states 1 and 2 by then.
In order to simulate the first ten *transitions* of the chain repeatedly as required, you need to know how to do the simulation once. Here is such a simulation using R statistical software. We begin by entering the transition matrix $P$ and then using it for simulation. (The transition matrix $P$ can be written on a single line, but it is easier to visualize as entered below.)
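The R code itself is not reproduced here; as a stand-in, here is a minimal sketch of one such run in Python (the transition matrix and the relabeled states 1–4 come from the problem; the seed and variable names are illustrative):

```python
import random

random.seed(1)  # assumption: any fixed seed, just for a reproducible run

# Transition matrix P for the relabeled states 1, 2, 3, 4
P = [
    [0.0,  1.0,  0.0,  0.0],
    [1.0,  0.0,  0.0,  0.0],
    [0.0,  0.0,  0.5,  0.5],
    [0.25, 0.25, 0.25, 0.25],
]

def step(state):
    """Draw the next state (1-4) given the current state (1-4)."""
    return random.choices([1, 2, 3, 4], weights=P[state - 1])[0]

# One simulated path of ten transitions, starting from state 4
path = [4]
for _ in range(10):
    path.append(step(path[-1]))

print(path)      # states visited at times 0, 1, ..., 10
print(path[10])  # one simulated observation of X_10
```

Each call to `step` samples the next state from the row of $P$ belonging to the current state, which is all a single run of the chain requires.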
Two additional runs give different paths, because each run uses fresh pseudorandom numbers.
Now, for $B = 100,000$ simulations of the state visited after 10 transitions, we wrap an outer loop around the program above.
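The outer-loop code is likewise not shown above; a self-contained Python sketch of the same idea follows (the names `B` and `PATH` are from the answer, the seed is an arbitrary choice):

```python
import random

random.seed(2025)  # assumption: any fixed seed, for reproducibility

P = [
    [0.0,  1.0,  0.0,  0.0],
    [1.0,  0.0,  0.0,  0.0],
    [0.0,  0.0,  0.5,  0.5],
    [0.25, 0.25, 0.25, 0.25],
]

B = 100_000  # number of replications
PATH = []    # row b holds all eleven states visited in replication b

for _ in range(B):
    path = [4]  # the chain starts in state 4
    for _ in range(10):
        path.append(random.choices([1, 2, 3, 4], weights=P[path[-1] - 1])[0])
    PATH.append(path)

# Relative frequency of each state at time n = 10
counts = [sum(1 for p in PATH if p[10] == s) for s in (1, 2, 3, 4)]
freq = [c / B for c in counts]
print(freq)  # should be close to the last row of P^10
```

Keeping every intermediate state in `PATH`, rather than only $X_{10}$, is what allows the later comparison at other time points.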
The last row of $P^{10}$ shown above is found by matrix multiplication (code not shown). As suggested in the comments by @Ian and @user230329, there is very good agreement with the simulation results.
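For completeness, here is a short Python sketch of that matrix power, done in exact fractions so that nothing beyond the matrix itself is assumed:

```python
from fractions import Fraction as F

P = [
    [F(0),    F(1),    F(0),    F(0)],
    [F(1),    F(0),    F(0),    F(0)],
    [F(0),    F(0),    F(1, 2), F(1, 2)],
    [F(1, 4), F(1, 4), F(1, 4), F(1, 4)],
]

def matmul(A, B):
    """Multiply two 4x4 matrices of Fractions."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Compute P^10 by repeated multiplication
P10 = P
for _ in range(9):
    P10 = matmul(P10, P)

print([float(x) for x in P10[3]])  # last row of P^10
# roughly (0.4812, 0.4812, 0.0188, 0.0188)
```

The exact last row is $\left(\tfrac{504605}{1048576}, \tfrac{504605}{1048576}, \tfrac{19683}{1048576}, \tfrac{19683}{1048576}\right)$, so the chain is still in a transient state at time 10 with probability only about $0.0375$.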
An advantage of capturing all of the steps visited in the matrix `PATH` is that a similar comparison can be made for the results after four transitions. Here we see that, after only a few steps, the chain has more likely than not already moved from the transient states to the persistent ones.
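A Python sketch of that comparison, putting the exact row of $P^4$ next to freshly simulated frequencies (the seed and replication count here are illustrative choices):

```python
import random
from fractions import Fraction as F

random.seed(7)  # assumption: any fixed seed

P = [
    [F(0),    F(1),    F(0),    F(0)],
    [F(1),    F(0),    F(0),    F(0)],
    [F(0),    F(0),    F(1, 2), F(1, 2)],
    [F(1, 4), F(1, 4), F(1, 4), F(1, 4)],
]

# Exact distribution of X_4 starting from state 4: p(0) P^4
row = [F(0), F(0), F(0), F(1)]
for _ in range(4):
    row = [sum(row[k] * P[k][j] for k in range(4)) for j in range(4)]
print([float(x) for x in row])  # (0.39453125, 0.39453125, 0.10546875, 0.10546875)

# Simulated frequencies of X_4 over 10,000 paths
Pf = [[float(x) for x in r] for r in P]
B = 10_000
hits = [0, 0, 0, 0]
for _ in range(B):
    s = 4
    for _ in range(4):
        s = random.choices([1, 2, 3, 4], weights=Pf[s - 1])[0]
    hits[s - 1] += 1
print([h / B for h in hits])

# Probability the chain is already in the persistent pair {1, 2} at n = 4
print(float(row[0] + row[1]))  # 101/128 = 0.7890625, already well above 1/2
```

So by $n = 4$ the chain sits in $\{1, 2\}$ with probability $101/128 \approx 0.79$, matching the remark above.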
My guess is that the reason you were asked to do some simulation is to develop intuition about how the chain moves. Perhaps soon in your continuing study of Markov chains you will see analytic formulas for the mean time spent in various transient states before 'absorption' into the persistent states.
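Those formulas amount to inverting $I - Q$, where $Q$ is the transition matrix restricted to the transient states 3 and 4. As a preview (this goes beyond the original exercise, and is offered only as an illustration), the fundamental-matrix computation for this chain is tiny:

```python
from fractions import Fraction as F

# Q: transition probabilities among the transient states {3, 4}
Q = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(1, 4)]]

# Fundamental matrix N = (I - Q)^(-1), via the 2x2 cofactor formula
a, b = F(1) - Q[0][0], -Q[0][1]
c, d = -Q[1][0], F(1) - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]

# Row sums of N give the expected number of steps spent in transient
# states before absorption into the persistent pair {1, 2}
mean_time_from_3 = N[0][0] + N[0][1]
mean_time_from_4 = N[1][0] + N[1][1]
print(mean_time_from_3, mean_time_from_4)  # 5 and 3
```

Starting from state 4, as this chain does, the expected time before absorption is 3 steps, consistent with the simulated paths moving to $\{1, 2\}$ quickly.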