Convergence of average probability in Markov chain


Let $P$ be a stochastic matrix, and let $v_0$ be a probability vector for the initial distribution. So $v_k=P^kv_0$ is the probability vector for the distribution at time $k$, and $v_{ki}$ is the probability of being in the $i$th state at time $k$.

Is it true that for any fixed $i$, the sequence $$\left(\frac{\sum_{j=1}^kv_{ji}}{k}\right)_{k=1}^\infty$$ converges? Is there a result that guarantees this? I've looked at the Perron-Frobenius theorem, but it doesn't seem to imply anything about this sequence.
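A quick numerical sanity check (a sketch in Python; the two-state matrix below is a deliberately periodic example, not from the question): $v_k$ itself oscillates forever, yet the Cesàro averages $\frac1k\sum_{j=1}^k v_j$ still converge. The matrix is taken column-stochastic so that $v_k = P^k v_0$ matches the convention above.

```python
import numpy as np

# Column-stochastic matrix of a 2-state periodic chain: it deterministically
# alternates between the states, so v_k itself never converges.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
v = np.array([1.0, 0.0])  # v_0: start in state 0

running_sum = np.zeros(2)
averages = []
for k in range(1, 1001):
    v = P @ v               # v_k = P v_{k-1} = P^k v_0
    running_sum += v
    averages.append(running_sum / k)

# The Cesàro averages settle to (1/2, 1/2) even though v_k oscillates.
print(averages[-1])  # → [0.5 0.5]
```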

Also, if we let $T_k(i,j)$ be the random variable giving the number of transitions from $i$ to $j$ between times $1$ and $k$, then does $T_k(i,j)/k$ converge as well?

Best answer:

This is a consequence / special case of the ergodic theorem for Markov chains. If $N_k(i)$ is the random variable giving the number of visits to $i$ between times $1$ and $k$, the ergodic theorem asserts that $N_k(i)/k$ converges almost surely. Since this ratio is bounded by $1$, the dominated convergence theorem says that its expectation also converges, and that is precisely the quantity you desire. (Let $\chi_j(i) = 1_{\{X_j = i\}}$ be the indicator of the event that we are in state $i$ at time $j$, and note that $N_k(i) = \sum_{j=1}^k \chi_j(i)$, so $\mathbb{E}[N_k(i)/k] = \frac1k\sum_{j=1}^k \mathbb{P}(X_j = i) = \frac1k\sum_{j=1}^k v_{ji}$ by linearity of expectation.)
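A simulation sketch of both frequencies (the 3-state transition matrix is hypothetical, chosen only for illustration; columns sum to 1 to match the convention $v_k = P^k v_0$). It shows $N_k(i)/k$ settling near the stationary distribution, and, for the second question, $T_k(i,j)/k$ settling near $\pi_i P_{ji}$, the stationary probability of being at $i$ times the probability of jumping to $j$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Column-stochastic transition matrix of an irreducible 3-state chain
# (hypothetical numbers); column s is the distribution of the next
# state given the current state s.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

k = 100_000
state = 0
visits = np.zeros(3)            # N_k(i): visits to state i
transitions = np.zeros((3, 3))  # T_k(i, j): jumps from i to j
for _ in range(k):
    nxt = rng.choice(3, p=P[:, state])
    visits[nxt] += 1
    transitions[state, nxt] += 1
    state = nxt

# This particular P is doubly stochastic, so its stationary distribution
# is uniform: N_k(i)/k should be near 1/3, and T_k(i, j)/k should be
# near pi_i * P[j, i] = P[j, i] / 3.
print(visits / k)
print(transitions / k)
```

The same experiment with any irreducible matrix shows the same stabilization; only the limiting values change.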

See Durrett's Essentials of Stochastic Processes, Theorem 1.9, for an elementary proof, or his Probability: Theory and Examples, Theorem 6.6.1, for a more measure-theoretic argument. Both are stated under the assumption that the chain is irreducible, but you can reduce to that case by writing $v_0$ as a convex combination of vectors supported on irreducible closed sets, with a little extra work if there are transient states.