I have a random walk problem with n+m states (1, 2, …, d, …, n, …, n+m). The agent starts at the initial state 1. On the next move, it moves to state 2 with probability p1 and stays at state 1 with probability 1-p1.
When the agent is at state i (for every i in {2, 3, …, d-1}), it moves to i+1 with probability p1, moves back to i-1 with probability p2, and stays at state i with probability 1-p1-p2.
When the agent is at state d, it moves back to state d-1 with probability p2. From this state the walk can diverge onto two routes: one runs from state d to state n, and the other runs from state d through state n+1 to state n+m. The agent moves to d+1 with probability d2, moves to n+1 with probability d1, and stays at state d with probability 1 - p1 - 2*p2 (so d1 + d2 must equal p1 + p2 for the row to sum to one).
For every state i in {d+1, …, n-1}, the agent moves to i+1 with probability p2, moves back to i-1 with probability p1, and stays at state i with probability 1-p1-p2. At state n, it moves back to n-1 with probability p1 and stays at state n with probability 1-p1.
The route from state n+1 to state n+m has transition probabilities similar to those of the route from state 1 to state d. State n+m is an absorbing state. I have uploaded a picture showing the Markov chain for this random walk problem.
My question is: if the agent starts at state 2, what is the probability that it visits state n before visiting state 1 or state n+m?
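One way to get a numerical answer is first-step analysis: make states 1, n, and n+m absorbing for the purpose of the question, and solve the linear system h(i) = Σ_j P(i,j) h(j) with h(n) = 1 and h(1) = h(n+m) = 0. Below is a minimal sketch. The concrete values of p1, p2, d1, d2, n, m, d are assumptions for illustration (the problem leaves them symbolic), and I have assumed that a backward step from state n+1 returns to state d, since the wiring of the mirrored segment is not fully specified.

```python
import numpy as np

# Assumed concrete parameters (not given in the problem statement).
p1, p2 = 0.4, 0.2
n, m, d = 6, 3, 3          # states 1..n+m = 1..9, branch point at d
d1, d2 = 0.3, 0.3          # chosen so d1 + d2 = p1 + p2, which the
                           # staying probability 1 - p1 - 2*p2 at d requires

N = n + m                  # total number of states
P = np.zeros((N + 1, N + 1))   # 1-based indexing; row 0 unused

# state 1
P[1, 2], P[1, 1] = p1, 1 - p1
# states 2 .. d-1
for i in range(2, d):
    P[i, i + 1], P[i, i - 1], P[i, i] = p1, p2, 1 - p1 - p2
# branch state d
P[d, d - 1] = p2
P[d, d + 1] = d2
P[d, n + 1] = d1
P[d, d] = 1 - p1 - 2 * p2
# states d+1 .. n-1 (forward prob p2, backward prob p1, as stated)
for i in range(d + 1, n):
    P[i, i + 1], P[i, i - 1], P[i, i] = p2, p1, 1 - p1 - p2
# state n
P[n, n - 1], P[n, n] = p1, 1 - p1
# second route n+1 .. n+m, assumed to mirror the 1..d segment
P[n + 1, d] = p2           # assumption: stepping back from n+1 returns to d
P[n + 1, n + 2] = p1
P[n + 1, n + 1] = 1 - p1 - p2
for i in range(n + 2, n + m):
    P[i, i + 1], P[i, i - 1], P[i, i] = p1, p2, 1 - p1 - p2
P[n + m, n + m] = 1.0      # absorbing

assert np.allclose(P[1:].sum(axis=1), 1.0)   # sanity check: valid rows

# Treat 1, n, n+m as absorbing and solve (I - Q) h = r, where Q is the
# transition matrix restricted to transient states and r(i) = P(i, n).
targets = {n}
stops = {1, n + m}
transient = [i for i in range(1, N + 1) if i not in targets | stops]
idx = {s: k for k, s in enumerate(transient)}

Q = np.array([[P[i, j] for j in transient] for i in transient])
r = np.array([sum(P[i, j] for j in targets) for i in transient])
h = np.linalg.solve(np.eye(len(transient)) - Q, r)

print(f"P(hit n before 1 or n+m | start at 2) = {h[idx[2]]:.6f}")
```

With symbolic parameters the same system can be solved by hand; the simulation-free linear solve also makes it easy to check how the answer moves as d1/d2 shift probability mass between the two routes.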