Consider a Markov process on the state space $S=\{1,2,3\}$ whose transitions are described by the column-stochastic transition matrix $$P = \begin{pmatrix} p_{1,1} & p_{1,2} & 0 \\ p_{2,1} & p_{2,2} & p_{2,3} \\ 0 & p_{3,2} & p_{3,3} \end{pmatrix}.$$ I want to describe the Markov process on the reduced state space $S^\prime = \{1 \ \& \ 2,\ 3\}$, where we are only interested in the transition probabilities from the lumped state $1 \ \& \ 2$ to state $3$ and vice versa.
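As a concrete check (the numerical values below are a hypothetical choice of mine, not part of the question), one can instantiate a column-stochastic $P$ with this sparsity pattern:

```python
import numpy as np

# Hypothetical entries p_{i,j}; in the column-stochastic convention each
# COLUMN sums to 1, and the (1,3) and (3,1) entries are zero as above.
P = np.array([
    [0.6, 0.3, 0.0],   # p_{1,1}, p_{1,2}, 0
    [0.4, 0.5, 0.7],   # p_{2,1}, p_{2,2}, p_{2,3}
    [0.0, 0.2, 0.3],   # 0,       p_{3,2}, p_{3,3}
])
assert np.allclose(P.sum(axis=0), 1.0)  # column-stochastic check
```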
In one step, due to how $P$ is defined, state $1$ cannot visit state $3$. However, by looking at the matrix $$P^2= \begin{pmatrix}p_{1,1}^2+p_{1,2}p_{2,1}&p_{1,1}p_{1,2}+p_{1,2}p_{2,2}&p_{1,2}p_{2,3}\\ p_{1,1}p_{2,1}+p_{2,1}p_{2,2}&p_{1,2}p_{2,1}+p_{2,2}^2+p_{2,3}p_{3,2}&p_{2,2}p_{2,3}+p_{2,3}p_{3,3}\\ p_{2,1}p_{3,2}&p_{2,2}p_{3,2}+p_{3,2}p_{3,3}&p_{2,3}p_{3,2}+p_{3,3}^2\end{pmatrix}$$ we can read off the probability of starting in state $1$ and reaching state $3$ after two steps. I was wondering whether, given the above, one could rewrite this process on the state space $S^\prime$. Here the new matrix $P^\prime$ would be a $2\times 2$ matrix whose entries depend on the probabilities appearing in $P$.
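One standard construction of such a $2\times 2$ matrix is the stationary-distribution-weighted aggregation; the sketch below (again with hypothetical values for the $p_{i,j}$, and not necessarily the reduction you have in mind) illustrates it. Note that exact (strong) lumpability of $\{1,2\}$ would require states $1$ and $2$ to have the same one-step probability of jumping to $3$, which fails here since $p_{3,1}=0$ while $p_{3,2}$ is generally positive; the weighted aggregation is therefore an approximation rather than an exact rewriting.

```python
import numpy as np

# Hypothetical column-stochastic P with the sparsity pattern of the question
P = np.array([
    [0.6, 0.3, 0.0],
    [0.4, 0.5, 0.7],
    [0.0, 0.2, 0.3],
])

# Two-step check: the (3,1) entry of P^2 should equal p_{2,1} * p_{3,2}
P2 = P @ P
assert np.isclose(P2[2, 0], P[1, 0] * P[2, 1])

# Stationary distribution pi with P pi = pi (right eigenvector, eigenvalue 1
# in the column-stochastic convention), normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(P)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Weight states 1 and 2 by their conditional stationary mass
w = pi[:2] / pi[:2].sum()

# Lumped 2x2 column-stochastic matrix on S' = {1 & 2, 3}
P_prime = np.array([
    [w @ P[:2, :2].sum(axis=0), P[:2, 2].sum()],  # transitions into {1 & 2}
    [w @ P[2, :2],              P[2, 2]],          # transitions into state 3
])
assert np.allclose(P_prime.sum(axis=0), 1.0)
```

The resulting $P^\prime$ is column stochastic on $S^\prime$; whether this weighted aggregation is the right reduction depends on whether you need exact transition probabilities (which exist only when the chain is lumpable) or a stationary-flow approximation.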