Discrete null recurrent Markov chain implies that $\lim_{n\to \infty} P(X_n=j) = 0$


In the book “Markov Chains” by Norris, Theorem 1.8.5 proves that if the chain is aperiodic, irreducible, and null recurrent, then $\lim_{n \to \infty} P(X_n = j) = 0$. I’m having trouble understanding one of the steps in the proof.

First, since the chain is null recurrent, for $T_j$ the first passage time to state $j$ we have $$ \sum^\infty_{k=0}P(T_j > k\mid X_0=j) = +\infty. $$ Given $\epsilon >0$, choose $K$ such that $$ \sum^{K-1}_{k=0}P(T_j >k\mid X_0=j) \geq 2/\epsilon. $$
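The divergence of this series can be checked concretely. Below is a sketch (my own illustration, not from Norris) that computes the exact tail probabilities $P(T_j > k \mid X_0 = j)$ for a standard null recurrent example, the symmetric simple random walk on $\mathbb{Z}$ with $j = 0$; the partial sums grow without bound, so for every $\epsilon > 0$ a suitable $K$ exists.

```python
from fractions import Fraction

def survival_probs(n_steps):
    """Exact P(T_0 > k) for k = 0..n_steps, for the symmetric simple
    random walk on Z started at 0, where T_0 is the first return time
    to 0.  Dynamic programming: propagate the walk's distribution and
    delete (absorb) any mass that lands back at 0."""
    half = Fraction(1, 2)
    dist = {0: Fraction(1)}          # X_0 = 0; T_0 > 0 by definition
    surv = [Fraction(1)]             # P(T_0 > 0) = 1
    for _ in range(n_steps):
        new = {}
        for x, p in dist.items():
            for y in (x - 1, x + 1):
                new[y] = new.get(y, Fraction(0)) + p * half
        new.pop(0, None)             # landing at 0 means T_0 <= this step
        dist = new
        surv.append(sum(dist.values(), Fraction(0)))
    return surv
```

The tails decay like $1/\sqrt{k}$, so the partial sums $\sum_{k=0}^{K-1} P(T_0 > k)$ grow like $\sqrt{K}$: slowly, but without bound, exactly as null recurrence requires.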

The following steps are the ones that I don’t understand.

The author states that for $n\geq K-1$ $$ 1\geq \sum_{k=n-K+1}^n P(X_k =j) P(T_j >n-k\mid X_0=j)= \sum_{k=0}^{K-1} P(X_{n-k}=j) P(T_j >k\mid X_0=j) $$ So $P(X_{n-k}=j)\leq \epsilon/2$ for some $k\in\{0,...,K-1\}$.

I don’t understand where either of these two inequalities comes from. If someone could clarify, I would really appreciate it.

Best Answer

The first inequality $1\geq \sum_{k=n-K+1}^n P(X_k =j) P(T_j >n-k\mid X_0=j)$ follows from the fact that the right-hand side is the probability of an event. By the Markov property, the $k$-th term is the probability that the chain is at $j$ at time $k$ and then does not return to $j$ during the next $n-k$ steps, i.e. not before time $n$. These events are disjoint as $k$ varies, because each one specifies the last time in $\{n-K+1,\dots,n\}$ at which the chain visits $j$. The sum of their probabilities is therefore the probability of their union, which is at most $1$.
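To make the disjointness argument precise (the notation $A_k$ is mine, not Norris's):

```latex
% A_k pins down the *last* visit to j in the window, so the A_k are
% pairwise disjoint.
\[
  A_k \;=\; \{\, X_k = j,\ X_m \neq j \text{ for } k < m \le n \,\},
  \qquad n-K+1 \le k \le n .
\]
% Conditioning on X_k = j and applying the Markov property at time k:
\[
  \mathbb{P}(A_k)
  \;=\; \mathbb{P}(X_k = j)\,
        \mathbb{P}(T_j > n-k \mid X_0 = j),
\]
% and summing over the window:
\[
  1 \;\ge\; \mathbb{P}\Bigl(\bigcup_{k=n-K+1}^{n} A_k\Bigr)
    \;=\; \sum_{k=n-K+1}^{n} \mathbb{P}(A_k)
    \;=\; \sum_{k=n-K+1}^{n} \mathbb{P}(X_k = j)\,
          \mathbb{P}(T_j > n-k \mid X_0 = j).
\]
```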

For the second inequality: we cannot have $P(X_{n-k}=j) > \epsilon/2$ for all $k \in \{0, \dots, K-1\}$, since that would give:

$1 \geq \sum_{k=0}^{K-1} P(X_{n-k}=j) P(T_j >k\mid X_0=j)>\sum_{k=0}^{K-1} \frac{\epsilon}{2} P(T_j >k\mid X_0=j) \geq \frac{\epsilon}{2} \frac{2}{\epsilon}=1$

where we have used your second inequality: $\sum^{K-1}_{k=0}P(T_j >k\mid X_0=j) \geq 2/\epsilon$.

We have reached $1>1$, which is absurd, so we must have $P(X_{n-k}=j)\leq \epsilon/2$ for some $k \in \{0,\dots,K-1\}$.
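As a sanity check on the theorem's conclusion (again my own illustration, not part of Norris's proof), $P(X_n = j)$ can be computed exactly for a concrete chain satisfying all three hypotheses: the lazy symmetric random walk on $\mathbb{Z}$ (stay put with probability $1/2$, step $\pm 1$ with probability $1/4$ each) is irreducible, aperiodic, and null recurrent, and $P(X_n = 0)$ does indeed decay to $0$, at rate roughly $1/\sqrt{\pi n}$.

```python
from fractions import Fraction

def p_at_origin(n_steps):
    """Exact P(X_n = 0) for n = 0..n_steps, for the *lazy* symmetric
    random walk on Z started at 0: stay w.p. 1/2, step -1 or +1 w.p.
    1/4 each.  This chain is irreducible, aperiodic and null recurrent,
    so the theorem predicts P(X_n = 0) -> 0."""
    stay, move = Fraction(1, 2), Fraction(1, 4)
    dist = {0: Fraction(1)}
    out = [Fraction(1)]              # P(X_0 = 0) = 1
    for _ in range(n_steps):
        new = {}
        for x, p in dist.items():
            new[x] = new.get(x, Fraction(0)) + p * stay
            new[x - 1] = new.get(x - 1, Fraction(0)) + p * move
            new[x + 1] = new.get(x + 1, Fraction(0)) + p * move
        dist = new
        out.append(dist[0])
    return out
```

A useful cross-check: one lazy step has the same distribution as the sum of two fair $\pm 1$ steps, so $P(X_n = 0) = \binom{2n}{n} 4^{-n}$, which tends to $0$ like $1/\sqrt{\pi n}$.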