Expected time between successive visits in a Markov Chain?


This is a pretty basic question and I know the answer is probably really obvious, but I am having trouble reasoning as to why the following is true:

(From my lecture notes):

""" Expected time between consecutive visits: Let hii be the expected time between successive vists by an ergodic Markov chain to state i. Then,

$h_{ii} = 1/\pi_i$.

Since the Markov Chain spends $\pi_i$ fraction of time in state i, it must be that the expected time between successive visits to that state is $1/\pi_i$.

"""

So for example, if $\pi_2 = 1/3$, then once the system has been running for a long time, an outside observer who looks in has probability 1/3 of finding the particle in state 2, and the expected time between successive visits to state 2 is 3. That last part is what doesn't make sense to me. Why is the expected time between successive visits to state $i$ equal to 1/(probability of finding the particle in state $i$)? How can this be true for any $n$-state chain?

Any intuition would help me a lot!


BEST ANSWER

Here’s one way to understand this: Think of “outside observer finds the system in state $i$” as a Bernoulli trial with probability $\mathbf\pi_i$ of success. It’s easy to show that the expected number of trials required to produce a success is $1/\mathbf\pi_i$.
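This geometric-distribution fact is easy to check numerically. The sketch below (my own illustration, not from the answer) simulates repeated Bernoulli trials with $p = 1/3$ and averages the number of trials until the first success:

```python
import random

def trials_until_success(p, rng):
    """Count independent Bernoulli(p) trials until the first success."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)
p = 1 / 3
samples = [trials_until_success(p, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1/p = 3
```

The sample mean lands very near $1/p = 3$, matching the expectation of a geometric distribution.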


amd's answer gives some intuition for why this is true, but I want to point out that the return time doesn't really follow a geometric distribution, since the successive observations aren't independent trials.

Consider a very simple Markov chain with states 1 and 2 and transition probabilities of 0.99 for 1$\to$2 and 2$\to$1. By symmetry, the particle will spend 50% of its time in each state, and indeed it typically visits state 1 every other turn.

But this isn't a Bernoulli trial or a geometric distribution: if we have just seen the particle at state 1, then it will be in state 2 on the next turn with 99% probability, not 50%. The trials are not independent.
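A quick simulation (my own sketch, not part of the original answer) makes this concrete: the return times to state 1 average 2, as $1/\pi_1$ predicts, but they are overwhelmingly concentrated at exactly 2 rather than geometrically distributed (a geometric with mean 2 would put only 50% of its mass on the first value):

```python
import random
from collections import Counter

# Two-state chain: from either state, switch with probability 0.99.
rng = random.Random(1)
state = 1
return_times = []        # gaps between successive visits to state 1
steps_since_visit = 0
for _ in range(200_000):
    if rng.random() < 0.99:
        state = 3 - state  # toggle 1 <-> 2
    steps_since_visit += 1
    if state == 1:
        return_times.append(steps_since_visit)
        steps_since_visit = 0

mean = sum(return_times) / len(return_times)
print(f"mean return time: {mean:.3f}")   # close to 1/pi_1 = 2
print(Counter(return_times).most_common(3))
```

About 98% of the return times come out as exactly 2, so the mean matches the geometric prediction even though the distribution does not.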

What's really happening is this: consider the state of the particle across a long time span of length $T$. Suppose it visits state $i$ a total of $N_i$ times during that time. If we measure the spacings between those visits, they will necessarily add up to about $T$ (it would be exactly $T$ if it started and ended at state $i$). So by definition the average spacing between visits to state $i$ is $T/N_i$, or very close to that.

On the other hand, it's very easy to see (by linearity of expectation) that on average $N_i = \pi_i T$, since that is simply what the stationary probability describes. This means $h_{ii} \approx T/N_i \approx 1/\pi_i$. As $T\to \infty$ both approximations become sharper, so in the limit we have equality.
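The argument above can be checked numerically for a chain with more than two states. The sketch below (my own illustration; the 3-state transition matrix is made up, and any ergodic matrix would do) records the spacings between successive visits to each state and compares their average with $1/\pi_i$, where $\pi$ is approximated by power iteration:

```python
import random

# A hypothetical 3-state ergodic chain (each row of P sums to 1).
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]

def stationary(P, iters=1000):
    """Approximate the stationary distribution by repeatedly applying P."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[j] * P[j][i] for j in range(len(P))) for i in range(len(P))]
    return pi

def step(state, rng):
    """Sample the next state from row P[state]."""
    r, cum = rng.random(), 0.0
    for nxt, p in enumerate(P[state]):
        cum += p
        if r < cum:
            return nxt
    return len(P) - 1

rng = random.Random(2)
T = 200_000
state = 0
last_visit = {0: 0, 1: 0, 2: 0}
spacings = {0: [], 1: [], 2: []}
for t in range(1, T + 1):
    state = step(state, rng)
    spacings[state].append(t - last_visit[state])  # gap since previous visit
    last_visit[state] = t

means = [sum(spacings[i]) / len(spacings[i]) for i in range(3)]
pi = stationary(P)
for i in range(3):
    print(f"state {i}: mean spacing {means[i]:.3f} vs 1/pi_{i} = {1 / pi[i]:.3f}")
```

For each state the empirical mean spacing agrees with $1/\pi_i$ to within a fraction of a percent, exactly as the $T/N_i$ argument predicts.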