Markov chain with reflecting boundaries: compute the expected number of times in a given state in the next k steps


I am dealing with an integer-valued, one-dimensional Markov process with a transition matrix that looks like:

$$P = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ p & 0 & (1-p) & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & p & 0 & (1-p) & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & p & 0 & (1-p) & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & (1-p) & 0 & p & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & (1-p) & 0 & p & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & (1-p) & 0 & p \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$$
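For concreteness, this matrix can be built as a function of $p$ (a sketch in NumPy; state $i$ corresponds to row index $i-1$):

```python
import numpy as np

def transition_matrix(p):
    """Build the 9x9 reflecting-boundary transition matrix from the question."""
    P = np.zeros((9, 9))
    P[0, 1] = 1.0                        # state 1 reflects up
    for i in range(1, 4):                # states 2-4: down w.p. p, up w.p. 1-p
        P[i, i - 1], P[i, i + 1] = p, 1 - p
    P[4, 3] = P[4, 5] = 0.5              # state 5 moves symmetrically
    for i in range(5, 8):                # states 6-8: down w.p. 1-p, up w.p. p
        P[i, i - 1], P[i, i + 1] = 1 - p, p
    P[8, 7] = 1.0                        # state 9 reflects down
    return P

P = transition_matrix(0.6)
assert np.allclose(P.sum(axis=1), 1.0)   # every row is a distribution
```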

I am playing a game in which I win by correctly guessing $k$ instances where the state will decrease within the next $n$ steps. For large $n$, this doesn't seem challenging, as I can use the stationary distribution, and for $k = 1$ it also seems relatively straightforward. However, I am having difficulty envisioning a concise algorithm for how to correctly place my $k$ guesses when $k > 1$ and $n$ is small enough that the chain has not converged to the stationary distribution.
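For the large-$n$ regime, the stationary distribution can be computed numerically as the left eigenvector of $P$ for eigenvalue 1. A sketch (note this chain is periodic with period 2, so the stationary distribution gives long-run time averages rather than a limiting distribution):

```python
import numpy as np

def transition_matrix(p):
    """The 9x9 reflecting-boundary matrix from the question."""
    P = np.zeros((9, 9))
    P[0, 1] = 1.0
    for i in range(1, 4):
        P[i, i - 1], P[i, i + 1] = p, 1 - p
    P[4, 3] = P[4, 5] = 0.5
    for i in range(5, 8):
        P[i, i - 1], P[i, i + 1] = 1 - p, p
    P[8, 7] = 1.0
    return P

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a distribution."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

P = transition_matrix(0.6)
pi = stationary(P)
# The chain only moves by +-1, so the downward move from state i is to i-1;
# the long-run chance that the next step is a decrease is pi @ down.
down = np.array([0.0] + [P[i, i - 1] for i in range(1, 9)])
p_decrease = pi @ down
```

Since this is a birth-death chain, detailed balance ($\pi_i P_{i,i+1} = \pi_{i+1} P_{i+1,i}$) forces the long-run fraction of downward steps to be exactly $1/2$ for any $p$, which is consistent with the large-$n$ case feeling easy.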

For example, suppose I know that I am going to take 10 steps in the Markov process, I am currently at state 7, and I have two guesses to make. Assuming that $p$ is at least $1/2$, my only opportunity to improve is to reach state 9 (where a correct guess is guaranteed, since the next step always decreases). So the question is: what is the probability that I reach state 9 at least twice within the next 10 steps, given that I start in state 7?
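This particular probability can be computed exactly by dynamic programming over the pair (current state, number of visits to state 9 so far, capped at 2). A sketch, assuming a visit is counted each time the chain arrives at the target state:

```python
import numpy as np

def transition_matrix(p):
    """The 9x9 reflecting-boundary matrix from the question."""
    P = np.zeros((9, 9))
    P[0, 1] = 1.0
    for i in range(1, 4):
        P[i, i - 1], P[i, i + 1] = p, 1 - p
    P[4, 3] = P[4, 5] = 0.5
    for i in range(5, 8):
        P[i, i - 1], P[i, i + 1] = 1 - p, p
    P[8, 7] = 1.0
    return P

def prob_at_least_k_visits(P, start, target, k, n):
    """P(visit `target` at least k times within n steps, starting at `start`),
    counting a visit each time the chain arrives at `target`."""
    S = P.shape[0]
    f = np.zeros((S, k + 1))   # f[s, c]: prob of state s with min(c, k) visits
    f[start, 0] = 1.0
    for _ in range(n):
        g = np.zeros_like(f)
        for c in range(k + 1):
            step = f[:, c] @ P                        # advance one step
            for s in range(S):
                g[s, min(c + (s == target), k)] += step[s]
        f = g
    return f[:, k].sum()

# state 7 is row index 6, state 9 is row index 8
P = transition_matrix(0.6)
answer = prob_at_least_k_visits(P, start=6, target=8, k=2, n=10)
```

The state space for the DP has only $9 \times (k+1)$ cells, so this stays cheap even for much larger $n$ or $k$.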