Convergence of empirical average of Markov chain from transient class


I am trying to build intuition for the limiting behaviour of the empirical average $$\frac1n\sum_{i=1}^nX_i\tag{$\ast$}$$ of a Markov chain $(X_n)_n$ with transition matrix $P$ (let's assume a finite state space for simplicity).

Let $C_1,\ldots,C_m$ be the communicating classes of $(X_n)$.

If the process starts in a recurrent (and thus closed) class $C_i$, then the Ergodic Theorem (see Thm 1.10.2 here) settles the convergence of $(\ast)$: since the class is closed, we may as well regard $X_n$ as a Markov chain whose state space is $C_i$, and this restricted chain is irreducible and recurrent.
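As a sanity check of this case, here is a minimal simulation sketch (the 3-state transition matrix is a made-up example, not from the question): for an irreducible recurrent chain, the empirical average of the states should converge to $\sum_x x\,\pi(x)$, where $\pi$ is the stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical irreducible chain on states {0, 1, 2}.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Simulate the chain and form the empirical average (1/n) * sum_i X_i.
n = 100_000
x, total = 0, 0
for _ in range(n):
    x = rng.choice(3, p=P[x])
    total += x
empirical = total / n

theoretical = float(np.dot(np.arange(3), pi))  # sum_x x * pi(x)
print(empirical, theoretical)  # the two should be close
```

The empirical and theoretical values agree up to Monte Carlo error, which is what the Ergodic Theorem predicts for the recurrent-start case.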

If, however, the process starts in a transient (hence open) class, then I don't understand the behaviour of $(\ast)$. With probability 1 the chain is eventually absorbed into one of the recurrent classes, but I expect the limit of $(\ast)$ to be a random variable whose value depends on which class the chain lands in. Is there a general result/framework for understanding the limit in this case, analogous to the Ergodic Theorem, that handles this general situation?
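The expected behaviour can be illustrated with a toy simulation (the chain below is a made-up example, not from the question): state $0$ is transient, while states $1$ and $2$ are absorbing, i.e. each forms a singleton recurrent class. Repeated runs of the empirical average should then converge to different limits, one per recurrent class, realising a random limit rather than a deterministic one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chain: state 0 is transient; states 1 and 2 are absorbing
# (each a singleton recurrent class). From 0 the chain stays with
# probability 1/2 or jumps to 1 or 2 with probability 1/4 each.
P = np.array([[0.5, 0.25, 0.25],
              [0.0, 1.0,  0.0 ],
              [0.0, 0.0,  1.0 ]])

def empirical_average(n):
    """Run the chain from state 0 for n steps; return (1/n) * sum_i X_i."""
    x, total = 0, 0
    for _ in range(n):
        x = rng.choice(3, p=P[x])
        total += x
    return total / n

# Each run's average ends up close to either 1 or 2, depending on which
# recurrent class the chain was absorbed into.
limits = [empirical_average(10_000) for _ in range(20)]
print(sorted(round(v, 2) for v in limits))
```

In each run the limit is (up to the short transient prefix) the ergodic average of whichever recurrent class absorbed the chain, so across runs the limit is distributed as a mixture weighted by the absorption probabilities.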