For ergodic Markov chains, when does $\lim_{N\to\infty} \left(\mathbb{E}\big[\sum_{n=1}^{N}f(X_n)\big] - N\mu(f)\right)$ exist?


For an ergodic Markov chain $(X_{n})$ (a continuous-time Markov process would be even better) with stationary distribution $\mu$, under which conditions does $$ L:=\lim_{N\to\infty} \left(\mathbb{E}\Big[\sum_{n=1}^{N}f(X_n)\Big] - N\mu(f)\right) $$ exist? In the other direction, do you know of examples where this limit does not exist?


Ergodicity says $$ \mathbb{E}[\frac{1}{N}\sum_{n=1}^{N}f(X_n)] - \mu(f) \to 0. $$

If the limit $L$ above exists, this could be refined to $$ \mathbb{E}\Big[\frac{1}{N}\sum_{n=1}^{N}f(X_n)\Big] - \mu(f) = \frac{L}{N} + o\Big(\frac{1}{N}\Big). $$ If we denote by $\mu_{n}$ the distribution of $X_n$, we have $$ \mathbb{E}\Big[\sum_{n=1}^{N}f(X_n)\Big] = \sum_{n=1}^{N}\mu_n(f) = N \mu(f) - \sum_{n=1}^{N} (\mu-\mu_n)(f), $$ and my question can be rephrased as

Under which conditions on the Markov chain $(X_n)$ and the function $f$ is $(\mu-\mu_n)(f)$ summable? In the other direction, do you know of examples where it is not summable?
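For a finite state space the answer is always yes: $\mu_n\to\mu$ geometrically, so $(\mu-\mu_n)(f)$ is summable. A two-state chain (my own illustration, not part of the question) makes $L=\sum_{n\ge 1}(\mu_n-\mu)(f)$ explicitly computable, since $\mu_n-\mu=\lambda^n(\mu_0-\mu)$ with $\lambda=1-a-b$:

```python
# Two-state illustration (hypothetical numbers): for a finite ergodic chain,
# mu_n -> mu geometrically, so the series defining L converges.
a, b = 0.3, 0.5                      # transition probabilities 0 -> 1 and 1 -> 0
P = [[1 - a, a], [b, 1 - b]]         # Markov transition matrix
mu = [b / (a + b), a / (a + b)]      # stationary distribution of P
f = [1.0, 4.0]                       # an arbitrary test function f

mu0 = [1.0, 0.0]                     # start deterministically in state 0
mu_n = mu0[:]
partial_sum = 0.0                    # partial sums of (mu_n - mu)(f)
for n in range(1, 200):
    mu_n = [sum(mu_n[i] * P[i][j] for i in range(2)) for j in range(2)]
    partial_sum += sum((mu_n[s] - mu[s]) * f[s] for s in range(2))

# Closed form: mu_n - mu = lam**n * (mu0 - mu) with lam = 1 - a - b, hence
# L = sum_{n>=1} (mu_n - mu)(f) = (mu0 - mu)(f) * lam / (1 - lam).
lam = 1 - a - b
L_exact = sum((mu0[s] - mu[s]) * f[s] for s in range(2)) * lam / (1 - lam)
print(partial_sum, L_exact)          # the partial sums converge to L
```

The interesting regime is therefore infinite state spaces, where the geometric decay can fail.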


I know that for many Markov chains we have $\mu_n =\mu+ \mathcal{O}(e^{-cn})$, e.g. in total variation. I'm interested in theory for cases where the convergence is not geometric but still manageable. This question is therefore perhaps a reference request, or perhaps it can be answered by a simple class of examples with easily tunable convergence properties.
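One tunable family (my own suggestion, not from the question or answer) is the "house-of-cards" chain on $\{0,1,2,\dots\}$: from state $0$ jump to state $j$ with probability $p_j$, then count down deterministically $j\to j-1\to\dots\to 0$. The return time to $0$ equals $j+1$, so a polynomial tail $p_j\propto (j+1)^{-\beta}$ gives sub-geometric convergence whose rate is tuned by $\beta$; heuristically, by renewal estimates the deviation decays on the order of $n^{2-\beta}$, so summability should correspond to a finite second moment of the return time ($\beta>3$). A truncated numeric sketch:

```python
# House-of-cards chain, truncated to M states (a sketch; the truncated chain
# is itself an exact finite Markov chain, so the checks below are exact).
M = 2000                                   # truncation of the state space
beta = 4.0                                 # tail exponent (the tunable knob)
w = [(j + 1) ** (-beta) for j in range(M)]
Z = sum(w)
p = [x / Z for x in w]                     # jump distribution from state 0

# Stationary distribution: mu(j) is proportional to P(return time > j).
tail = [0.0] * M
acc = 0.0
for j in reversed(range(M)):
    acc += p[j]
    tail[j] = acc
m = sum(tail)                              # = E[return time]
mu = [t / m for t in tail]

# Iterate mu_n from delta_0 and record (mu - mu_n)(f) for f = indicator of 0.
mu_n = [0.0] * M
mu_n[0] = 1.0
devs = []
for n in range(1, 400):
    new = [mu_n[0] * p[j] + (mu_n[j + 1] if j + 1 < M else 0.0)
           for j in range(M)]
    mu_n = new
    devs.append(mu[0] - mu_n[0])
print(devs[10], devs[-1])                  # polynomial, beta-dependent decay
```

Varying `beta` between 2 and 4 moves the chain from barely ergodic to having summable deviations, which is the kind of knob the question asks for.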

1 Answer
An important class of examples is the following:

Let $X$ be a Markov chain with state space $\mathcal{X}$ and Markov operator $P$. Suppose that $X$ is aperiodic and $f$-regular, i.e. there are functions $f\geq 1$ and $V\geq 0$, a petite Borel set $C$ and a constant $b<\infty$ such that $$\Delta V:=PV-V\leq -f+b\,\mathbb{I}_C,$$ where $\mathbb{I}$ denotes the indicator function. This is called drift condition (V3) in [1].
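The drift condition is easy to check on concrete examples. As a sanity check (my own example, not from the answer): for a reflected random walk on $\{0,1,2,\dots\}$ with downward drift ($p<1/2$), (V3) holds with $f\equiv 1$, $V(x)=x/(1-2p)$ and $C=\{0\}$:

```python
# Numeric check of drift condition (V3) for a hypothetical example:
# reflected random walk: from x >= 1 go to x+1 w.p. p and to x-1 w.p. 1-p;
# from 0 go to 1 w.p. p and stay at 0 w.p. 1-p.  With V(x) = x/(1-2p),
# f = 1 and C = {0}, we expect PV - V <= -f + b * 1_C.
p = 0.3

def V(x):
    return x / (1 - 2 * p)

def PV(x):                        # (PV)(x) = E[V(X_1) | X_0 = x]
    return p * V(x + 1) + (1 - p) * V(max(x - 1, 0))

b = 1 + p / (1 - 2 * p)           # constant absorbing the drift at the atom 0
drift = [PV(x) - V(x) for x in range(50)]
ok = all(d <= -1 + (b if x == 0 else 0.0) + 1e-12
         for x, d in enumerate(drift))
print(ok, drift[0], drift[1])     # drift is -1 off the atom, <= -1 + b at 0
```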

Suppose in addition that $\mu(V)<\infty$. Then for any function $g$ satisfying $|g|\leq f$ there is a constant $R>0$ such that $$\sum_{k=0}^{\infty}\left|(P^{k}g)(x)-\mu(g)\right|\leq R(V(x)+1), \quad x\in\mathcal{X}.$$ Thus the sum $$\hat g(x)=\sum_{k=0}^{\infty}\left((P^{k}g)(x)-\mu(g)\right)$$ is absolutely convergent and satisfies $|\hat g|\leq R(V+1)$. In particular, since $(\mu_n-\mu)(g)=\mu_0\big(P^{n}g-\mu(g)\big)$, the series $\sum_{n}(\mu-\mu_n)(g)$ converges absolutely whenever $\mu_0(V)<\infty$, so the limit $L$ in the question exists; you also get the bound $|\mu_0(\hat g)|=|\mathbb{E}[\hat g(X_0)]|\leq R(\mu_0(V)+1)$.
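By telescoping, this $\hat g$ solves the Poisson equation $\hat g - P\hat g = g - \mu(g)$ (a standard fact, not stated explicitly above). For a small finite chain one can verify this numerically; the 3-state chain below is my own illustration:

```python
# Verify the Poisson equation for g_hat(x) = sum_k ((P^k g)(x) - mu(g))
# on a hypothetical 3-state chain.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
g = [1.0, -2.0, 0.5]
n = len(g)

def apply_P(v):                   # matrix-vector product (P v)(x)
    return [sum(P[x][y] * v[y] for y in range(n)) for x in range(n)]

# Stationary distribution by power iteration on the left.
mu = [1.0 / n] * n
for _ in range(5000):
    mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
mug = sum(mu[x] * g[x] for x in range(n))

# g_hat as a truncated (rapidly converging) series sum_k (P^k g - mu(g)).
g_hat = [0.0] * n
v = g[:]
for _ in range(500):
    for x in range(n):
        g_hat[x] += v[x] - mug
    v = apply_P(v)

Pg_hat = apply_P(g_hat)
residual = [g_hat[x] - Pg_hat[x] - (g[x] - mug) for x in range(n)]
print(residual)                   # close to [0, 0, 0]
```

The Poisson equation is exactly what makes $\hat g$ useful for central limit theorems and asymptotic-bias results of the kind asked about.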

See Section 17.4 in [1] for generalizations and further details.

[1] Meyn, S., Tweedie, R. L., & Glynn, P. W. (2009). Markov Chains and Stochastic Stability. Cambridge University Press.