I have a Markov chain with four states $\left(X_1, X_2, X_3, X_4\right)$ plus an absorbing state $\left(X_5\right)$, with transition probability matrix
$ M= \begin{bmatrix} 0.60 & 0.20 & 0.15 & 0.03 & 0.02 \\ 0.20 & 0.60 & 0.13 & 0.04 & 0.03 \\ 0.02 & 0.16 & 0.64 & 0.14 & 0.04 \\ 0.06 & 0.12 & 0.23 & 0.54 & 0.05 \\ 0.00 & 0.00 & 0.00 & 0.00 & 1.00 \end{bmatrix} $
Now I want to aggregate the first four states into two states, $Y_1 = \left(X_1, X_2\right)$ and $Y_2 = \left(X_3, X_4\right)$.
Is there any established method to derive the transition probability matrix for $\left(Y_1, Y_2\right)$ from $M$?
Any pointers would be very helpful.
The random process whose states are $Y_1$ and $Y_2$ is in general no longer a Markov chain. Its transition probabilities can depend on the entire history of the process, not just on the current state, so there is no useful notion of a transition matrix.
To see an explicit counterexample, we use an underlying Markov chain that exaggerates this effect, with transition matrix $$ M = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}, $$ which is the process that deterministically cycles through $X_1 \rightarrow X_2\rightarrow X_3\rightarrow X_4 \rightarrow X_1 \rightarrow \dotsb$
Now, letting $(A_n)_n$ and $(B_n)_n$ be the $\{X_i\}$- and $\{Y_i\}$-valued processes respectively, we compare the distribution of $B_3$ given $(B_1, B_2) = (Y_2, Y_2)$ with its distribution given $(B_1, B_2) = (Y_1, Y_2)$.
First, the only way to have $(B_1, B_2) = (Y_2, Y_2)$ is $(A_1, A_2) = (X_3, X_4)$, in which case it must be that $A_3 = X_1$, so that $B_3 = Y_1$: $$ \Pr(B_3 = Y_1 \mid B_1 = Y_2, B_2 = Y_2) = 1. $$
On the other hand, the only way to have $(B_1, B_2) = (Y_1, Y_2)$ is $(A_1, A_2) = (X_2, X_3)$, in which case it must be that $A_3 = X_4$, so that $B_3 = Y_2$: $$ \Pr(B_3 = Y_1 \mid B_1 = Y_1, B_2 = Y_2) = 0. $$
That is, the distribution of $B_n$ conditional on the whole history does not depend only on $B_{n-1}$, so $(B_n)_n$ is not a Markov chain.
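The counterexample is small enough to check by brute force. Here is a quick sketch in Python (the helper names `step` and `lump` are mine) that enumerates all four starting states of the deterministic cycle and tallies the outcome $B_3$ against the lumped history $(B_1, B_2)$:

```python
# Numerical check of the counterexample: the deterministic 4-cycle
# X1 -> X2 -> X3 -> X4 -> X1, lumped as Y1 = {X1, X2}, Y2 = {X3, X4}.
from collections import defaultdict

def step(x):
    """One step of the deterministic cycle on states 1..4."""
    return x % 4 + 1

def lump(x):
    """Map X1, X2 -> Y1 and X3, X4 -> Y2."""
    return 1 if x in (1, 2) else 2

# Enumerate every trajectory (A1, A2, A3) and tally the lumped outcome
# B3 against the lumped history (B1, B2).
counts = defaultdict(lambda: defaultdict(int))
for a1 in range(1, 5):
    a2, a3 = step(a1), step(step(a1))
    counts[(lump(a1), lump(a2))][lump(a3)] += 1

# Given (B1, B2) = (Y2, Y2), the next lumped state is always Y1 ...
assert counts[(2, 2)] == {1: 1}
# ... but given (B1, B2) = (Y1, Y2), it is always Y2.
assert counts[(1, 2)] == {2: 1}
```

The two assertions confirm that the two histories ending in $Y_2$ lead to different (in fact, opposite) distributions for $B_3$.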