Question about a random walk Markov chain


For a random walk, let $a$ denote the probability that the Markov chain will ever return to state $0$ given that it is currently in state $1$. Because the Markov chain will always increase by $1$ with probability $p$ or decrease by $1$ with probability $1-p$ no matter what its current state, **note that $a$ is also the probability that the Markov chain currently in state $i$ will ever enter state $i-1$, for any $i$**.

I cannot see how the bold sentence can be true. Could anyone help me understand it? Thanks very much.

On BEST ANSWER

The probability $a$ of ever going back to the previous state, given that you are in state $s_i$, depends on the transition probabilities (both upward and downward) between all subsequent states $s_j$ and $s_{j+1}$ with $j \geq i$, together with the probability of going from state $i$ to state $i-1$.

As an example, say you start in state $1$. There are many ways to reach state $0$: you can go immediately from state $1$ to state $0$; you can go first from state $1$ to state $2$, then back to state $1$, and then to state $0$; and so on. There are infinitely many such paths from state $1$ to state $0$. Now notice that the paths from state $i$ down to state $i-1$ are exactly the same paths shifted by $i-1$ states, and since the transition probabilities are the same everywhere, each shifted path has the same probability as the original one. In both cases you sum up all these path probabilities (an infinite sum, see further), and that sum is the probability $a$ you are looking for.
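This translation invariance is easy to check empirically. Below is a small Monte Carlo sketch (not part of the original answer); note the truncation at `max_steps` is an approximation, since in principle a walk could return arbitrarily late:

```python
import random

def ever_steps_down(start, p, max_steps=2000):
    """Run one random walk from `start` (up with prob p, down with 1 - p).

    Return True if the walk reaches start - 1 within max_steps.
    Truncating at max_steps slightly underestimates the true return
    probability, but for these parameters the error is negligible.
    """
    state = start
    for _ in range(max_steps):
        state += 1 if random.random() < p else -1
        if state == start - 1:
            return True
    return False

def estimate_a(start, p, trials=5000):
    """Estimate a = P(ever reach start - 1 | currently at start)."""
    random.seed(0)  # reproducible runs
    return sum(ever_steps_down(start, p) for _ in range(trials)) / trials

# With p = 0.6, the estimate from state 1 and the estimate from state 5
# agree (up to Monte Carlo noise): a does not depend on the start state.
print(estimate_a(1, 0.6), estimate_a(5, 0.6))
```

The two printed estimates coincide up to sampling noise, which is precisely the bold claim in the question.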

Let us view an example with two starting states, $s_1$ and $s_3$, where we want to know the probability of ever going back to state $s_0$ and state $s_2$, respectively.

[Figure: the two chains, with the relevant transitions to the right of the starting states marked in red]

It is clear that only the transitions to the right of the current state matter: $s_1$ in the top chain of the figure, $s_3$ in the bottom chain. The transitions that matter are colored red.

Both red sets of transition probabilities are identical, so when we sum the corresponding path probabilities up to infinity we get the same total. That is why the probability of ever going back to the previous state is the same in both cases: the series $\sum_{j=1}^{\infty} P(\tau_{j})$, with $\tau_j$ the event that the walk first reaches the previous state in exactly $j$ steps, is the same for both starting states.
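To make this series concrete: for a walk that moves up with probability $p$, the first passage to the state below can only happen after an odd number of steps, and $P(\tau_{2k+1}) = C_k\, p^k (1-p)^{k+1}$, where $C_k$ is the $k$-th Catalan number (a standard fact about simple random walks, added here as an aside; it is not stated in the original answer). A short numeric sketch sums this series and compares it with the known closed form $a = \min\!\bigl(1, \tfrac{1-p}{p}\bigr)$:

```python
def return_prob_series(p, terms=2000):
    """Sum P(first passage in 2k+1 steps) = C_k * p^k * (1-p)^(k+1).

    Terms are built iteratively via the Catalan recurrence
    C_{k+1} = C_k * 2(2k+1)/(k+2), avoiding huge integers.
    """
    term = 1 - p  # k = 0: one immediate step down
    total = term
    for k in range(terms - 1):
        term *= 2 * (2 * k + 1) / (k + 2) * p * (1 - p)
        total += term
    return total

# Partial sums match the closed form min(1, (1-p)/p) for p away from 1/2
# (at p = 1/2 the terms decay only polynomially, so convergence is slow).
for p in (0.4, 0.6):
    print(p, return_prob_series(p), min(1.0, (1 - p) / p))
```

Note the sum does not depend on the starting state at all, only on $p$: this is the bold claim again, seen from the series side.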

The scenario would be different if these transition probabilities were not the same everywhere. Then the probability of ever going back to your previous state could differ from one starting state to another.
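To see that contrast concretely, here is a simulation sketch with a made-up state-dependent rule (upward probability $0.3$ below state $3$ and $0.8$ from state $3$ on; the rule is purely illustrative and not from the answer). The estimated probability of ever stepping down now depends on where you start:

```python
import random

def up_prob(state):
    # Illustrative state-dependent rule (an assumption for this sketch):
    # downward drift below state 3, upward drift from state 3 on.
    return 0.3 if state < 3 else 0.8

def estimate_return(start, trials=4000, max_steps=2000):
    """Estimate P(ever reach start - 1 | currently at start) by simulation."""
    random.seed(1)  # reproducible runs
    hits = 0
    for _ in range(trials):
        state = start
        for _ in range(max_steps):
            state += 1 if random.random() < up_prob(state) else -1
            if state == start - 1:
                hits += 1
                break
    return hits / trials

# Starting at 1 vs starting at 5 now gives clearly different estimates,
# because the red transition probabilities are no longer the same.
print(estimate_return(1), estimate_return(5))
```

With homogeneous transition probabilities the two printed numbers would agree; here they differ markedly, which is exactly the point of the paragraph above.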