Let $\{X_n\}$ be an irreducible Markov chain on the discrete state space $\mathbb{N}$ that has a stationary distribution $\pi$.
Prove or disprove: with probability $1$, $$\lim_{n\rightarrow +\infty}\frac{1}{n}(X_1+\ldots+X_n) =\sum_{x\in \mathbb{N}}x~\pi(x).$$
Note. The text I am working on suggests to me that this is true. For a proof, I had in mind to use the Ergodic Theorem $\displaystyle \lim_{n\rightarrow +\infty}\frac{1}{n}(f(X_1)+\ldots+f(X_n)) =\sum_{x\in \mathbb{N}}f(x)~\pi(x)$, which holds for bounded functions $f$ on $\mathbb{N}$; but in my case $f(x)=x$ is not bounded on $\mathbb{N}$. Should I write the above mean value as a sum $\frac{1}{n}(g(X_1)+\ldots+g(X_n))$ for a suitable bounded function $g$? If so, I don't see how.
Thanks in advance!
Yes, that is true regardless of the initial condition. Here is a proof:
Claim:
Suppose $\{X_n\}_{n=1}^{\infty}$ is an irreducible discrete time Markov chain (DTMC) on a finite or countably infinite state space $\mathcal{S}$. Suppose it has a stationary distribution $\pi(x)$ for all $x \in \mathcal{S}$. Then for any nonnegative function $h:\mathcal{S}\rightarrow\mathbb{R}$ and any initial condition $X_1$ we have: $$ \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n h(X_i) = \sum_{x \in \mathcal{S}} h(x)\pi(x) \quad \mbox{(with prob 1)} $$
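Before the proof, here is a quick numerical sanity check of the claim with the unbounded function $h(x)=x$. The chain below is my own illustrative example (not from the question): a reflected birth-death walk on $\{0,1,2,\dots\}$ with up-probability $p<1/2$. Detailed balance gives the geometric stationary distribution $\pi(x)=(1-r)r^x$ with $r=p/(1-p)$, so $\sum_x x\,\pi(x)=r/(1-r)$; for $p=1/3$ this mean is $1$.

```python
import random

def simulate_time_average(p=1/3, steps=500_000, seed=0):
    """Time average (X_1 + ... + X_n)/n for the birth-death chain on
    {0,1,2,...} with P(x, x+1) = p and P(x, max(x-1, 0)) = 1 - p."""
    rng = random.Random(seed)
    x = 0
    total = 0
    for _ in range(steps):
        total += x          # accumulate h(X_n) = X_n
        if rng.random() < p:
            x += 1
        else:
            x = max(x - 1, 0)
    return total / steps

# Stationary mean is r/(1-r) = 1 for p = 1/3 (r = 1/2).
avg = simulate_time_average()
print(avg)  # lands near 1 for this seed and run length
```

This is only an illustration of the claim, not part of the proof.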
Proof part 1:
Order the states $\mathcal{S}=\{s_1, s_2, s_3, \ldots\}$. Let $f_n(j)$ be the relative frequency of visits to state $s_j$ during the first $n$ steps. Then for all positive integers $n$ and $k$: $$ \frac{1}{n}\sum_{i=1}^nh(X_i) =\sum_{j=1}^{\infty} h(s_j)f_n(j) \geq \sum_{j=1}^k h(s_j)f_n(j) $$ where the inequality uses the fact that $h(\cdot) \geq 0$. Taking a $\liminf$ as $n\rightarrow \infty$ and using the fact that $f_n(j) \rightarrow \pi(s_j)$ (with prob 1), which holds because an irreducible chain with a stationary distribution is positive recurrent, gives: $$\liminf_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n h(X_i) \geq \sum_{j=1}^k h(s_j)\pi(s_j)$$ The above holds for all $k>0$. Taking a limit as $k\rightarrow\infty$ gives: $$\liminf_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n h(X_i) \geq \sum_{j=1}^{\infty} h(s_j)\pi(s_j) = \sum_{x \in \mathcal{S}} h(x)\pi(x) \quad \mbox{(with prob 1)} \quad (1) $$
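The truncation step above can also be seen numerically: for nonnegative $h$, the truncated sums $\sum_{j\leq k} h(s_j)\pi(s_j)$ increase monotonically to the full series. A small check with $h(x)=x$ and a geometric $\pi(x)=(1-r)r^x$, $r=1/2$ (an example distribution chosen for illustration; its full mean is $r/(1-r)=1$):

```python
# Truncated expectations sum_{x <= k} x * pi(x) for pi(x) = (1-r) r^x.
r = 0.5
partial = []
s = 0.0
for x in range(80):
    s += x * (1 - r) * r**x
    partial.append(s)

# Nonnegative terms: the truncated sums are nondecreasing and
# converge to the full mean r/(1-r) = 1.
assert all(a <= b for a, b in zip(partial, partial[1:]))
print(partial[-1])  # ≈ 1.0
```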
Proof part 2:
Fix a state $s_1 \in \mathcal{S}$ and, given $X_1=s_1$, let $T_{11}=\min\{n\geq 2 : X_n = s_1\}$ be the first return index, so a regeneration cycle occupies the time slots $1, \ldots, T_{11}-1$. Define: $$ c = \frac{E\left[\sum_{n=1}^{T_{11}-1} h(X_n) \,\middle|\, X_1=s_1\right]}{E[T_{11}-1 \mid X_1=s_1]} $$ where $E[T_{11}-1\mid X_1=s_1]$ is the mean recurrence time of $s_1$, which is finite because an irreducible chain with a stationary distribution is positive recurrent. The value of $c$ is possibly infinite. By the renewal-reward theorem, $$\lim_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n h(X_i) = c \quad (\mbox{with prob 1}) \quad (2) $$ and, by irreducibility, this holds regardless of the initial condition. Now imagine starting with the probabilistic initial condition $$ P[X_1=x] = \pi(x) \quad \forall x \in \mathcal{S} $$ so that we start out in the stationary distribution. Then $P[X_n=x]=\pi(x)$ for all steps $n$ and all $x \in \mathcal{S}$, and hence $E[h(X_n)] = \sum_{x\in\mathcal{S}} h(x)\pi(x)$ for all time steps $n>0$. So: $$ \liminf_{n\rightarrow\infty} E\left[\frac{1}{n}\sum_{i=1}^n h(X_i)\right] = \sum_{x\in\mathcal{S}} h(x)\pi(x) \quad (3)$$ On the other hand, Fatou's lemma for nonnegative random variables implies: $$ \liminf_{n\rightarrow\infty} E\left[\frac{1}{n}\sum_{i=1}^n h(X_i)\right] \geq E\left[ \liminf_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n h(X_i)\right] = c $$ where the final equality uses equation (2). Substituting equation (3) into the left-hand side of the above inequality gives: $$ \sum_{x\in\mathcal{S}} h(x)\pi(x) \geq c $$ Combining this with equation (2) and then equation (1) gives: \begin{align*} \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^nh(X_i) = c &\leq \sum_{x\in\mathcal{S}} h(x)\pi(x) \quad \mbox{(with prob 1)} \\ &\leq \liminf_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^nh(X_i) \quad \mbox{(with prob 1)} \end{align*} Since the $\liminf$ can be no larger than the limit, all three quantities coincide. This proves the claim.
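The renewal-reward constant $c$ can also be estimated by direct simulation: run many regeneration cycles from $s_1$ and divide the total accumulated reward by the total elapsed time. A sketch for an illustrative birth-death chain on $\{0,1,2,\dots\}$ (up-probability $p$, reflected at $0$; my own example, not from the post), with $s_1=0$ and $h(x)=x$. For $p=1/3$ the stationary mean $\sum_x x\,\pi(x)$ is $1$, so the estimate of $c$ should land near $1$:

```python
import random

def estimate_c(p=1/3, cycles=100_000, seed=1):
    """Estimate c = E[cycle reward] / E[cycle length] for the chain with
    P(x, x+1) = p, P(x, max(x-1, 0)) = 1 - p, started at s_1 = 0, h(x) = x.
    A cycle covers the time slots from the start until just before the
    return to 0."""
    rng = random.Random(seed)
    total_reward = 0
    total_time = 0
    for _ in range(cycles):
        x = 0
        while True:
            total_reward += x       # accumulate h(X_n) = X_n
            total_time += 1
            if rng.random() < p:
                x += 1
            else:
                x = max(x - 1, 0)
            if x == 0:              # cycle ends on return to s_1 = 0
                break
    return total_reward / total_time

c_est = estimate_c()
print(c_est)  # close to the stationary mean 1 for p = 1/3
```

By the law of large numbers applied cycle by cycle, this ratio converges to $c$, which the proof shows equals $\sum_x h(x)\pi(x)$.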