I have the following Markov chain and am trying to evaluate the probability that the chain reaches state $4$ before it returns to state $1$, given that it starts in state $1$. I've seen many typical problems with constant probabilities $p$ and $q$, but I am not sure how to approach this style of question.
I have simplified the problem for any readers: $q_x=1-p_x$
$ \left( \begin{array}{cccc} q_1 & p_1 & 0 & 0 \\ q_2 & 0 & p_2 & 0 \\ q_3 & 0 & 0 & p_3 \\ q_4 & 0 & 0 & 0 \end{array} \right)$
Would be grateful for any help, thank you!
First you should deal with the fact that your process starts out on the "boundary" and can remain there with positive probability: from state $1$ the chain returns to $1$ immediately with probability $q_1$ and moves to $2$ with probability $p_1$. Thus the probability to hit $1$ first is $q_1$ plus $p_1$ times the probability to hit $1$ first starting from $2$. So we have reduced the problem to finding the probability to hit $1$ first starting from $2$.
There is a standard recipe for this. It works as follows. Define $\tau=\min \{ n>0 : X_n \in \{ 1,4 \} \}$ and $q(y)=P(X_\tau = 1 \mid X_0=y)$. (Note that $\tau$ cannot be zero, which is why we had to do the first step "by hand" in the previous paragraph.) Then by conditioning on the first step, we find that $(Lq)(y)=0$ for $y \in \{ 2,3 \}$, with boundary conditions $q(1)=1$ and $q(4)=0$. Here $L$ is called the generator, which in discrete time is $P-I$, where $P$ is the transition matrix and $I$ is the identity.
I write it in this general form because it works in discrete time + discrete space, in continuous time + discrete space (where the generator is again a matrix), and in continuous time + continuous space (where the generator is a differential operator).
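As a concrete sanity check of the discrete-time recipe, here is a short numerical sketch. The values of $p_1,p_2,p_3$ are arbitrary placeholders (any values in $(0,1)$ work); it solves $(Lq)(y)=0$ on the interior states $\{2,3\}$ with the boundary conditions $q(1)=1$, $q(4)=0$, then applies the first-step reduction from state $1$.

```python
import numpy as np

# Placeholder values for p_1, p_2, p_3; any values in (0,1) work.
p = np.array([0.7, 0.6, 0.5])
qp = 1 - p  # q_x = 1 - p_x

# Transition matrix on states 1..4 (indices 0..3).
# Row 4 must sum to 1, so its return probability to state 1 is taken as 1.
P = np.array([
    [qp[0], p[0], 0.0,  0.0],
    [qp[1], 0.0,  p[1], 0.0],
    [qp[2], 0.0,  0.0,  p[2]],
    [1.0,   0.0,  0.0,  0.0],
])

L = P - np.eye(4)              # discrete-time generator L = P - I
interior = [1, 2]              # states 2 and 3 (0-indexed)
q_boundary = {0: 1.0, 3: 0.0}  # q(1) = 1, q(4) = 0

# (Lq)(y) = 0 on the interior  =>  A q_int = -(boundary contributions)
A = L[np.ix_(interior, interior)]
b = -sum(val * L[interior, s] for s, val in q_boundary.items())
q_int = np.linalg.solve(A, b)

q2 = q_int[0]          # probability to hit 1 before 4, starting from 2
ans = p[0] * (1 - q2)  # reach 4 before returning to 1, starting from 1
print(ans)
```

For this particular chain the result collapses to $p_1 p_2 p_3$: the only way to reach $4$ before revisiting $1$ is the direct path $1\to2\to3\to4$, since every other transition goes back to $1$. The linear-system approach is overkill here, but it carries over unchanged to chains where no closed form is obvious.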