Proving that in a homogeneous DTMC, $i\leftrightarrow j$ and $j$ recurrent $\implies f_{ij}=1$


I'm struggling to understand the proof of the following theorem:

Theorem: Let $\{\mathsf X_n\}_{n\in\mathbb N}$ be a homogeneous DTMC, $P$ its transition matrix and $\mathsf S$ its state space. Let $i,j\in\mathsf S$ with $i \leftrightarrow j$ and $j$ recurrent. Then $f_{ij} = 1$.

Proof: Let $m\in\mathbb N$ be such that $P^{(m)}_{ji} > 0$ (such an $m$ exists since $j \to i$). Then

$$1 = f_{jj} = \mathbb P\big( \mathsf X_n = j \text{ for infinitely many } n\mid \mathsf X_0 = j \big) \\= \mathbb P\big( \mathsf X_n = j \text{ for some } n > m+1 \mid \mathsf X_0 = j \big) = \cdots$$

And so on; I won't report the remaining part, because this step is the only one giving me a bad headache.

Now, why is the third equality true? To me the events inside the two probabilities sound totally different. I have seen alternative proofs involving renewal processes, but we didn't study those in class and they aren't part of our program; I need to understand this proof.

EDIT: Supposedly the two arguments should be equivalent, i.e.

$$\mathsf X_n = j \text{ for infinitely many } n \iff \mathsf X_n = j \text{ for some } n > m+1$$

The "$\implies$" part is obvious, but I can't see why the "$\impliedby$" holds.

We saw and proved a theorem asserting that if a state $k\in\mathsf S$ is recurrent, then paths starting in $k$ almost surely visit $k$ at least once ($f_{kk} = 1$), and hence paths starting in $k$ almost surely return to $k$ infinitely many times. Might this have something to do with it?
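As a quick numerical sanity check (not a proof), one can estimate $f_{ij}$ by simulation on a small irreducible finite chain, where every state is recurrent. The two-state transition matrix below is an illustrative assumption, not taken from the question; a finite time horizon stands in for "in finite time".

```python
import random

# Hypothetical 2-state chain, irreducible, hence both states recurrent.
P = [[0.5, 0.5],
     [0.25, 0.75]]

def step(state, rng):
    """Draw the next state from row `state` of P."""
    return 0 if rng.random() < P[state][0] else 1

def hits(start, target, horizon, rng):
    """True if the chain started at `start` visits `target`
    within `horizon` steps (a finite proxy for 'ever')."""
    s = start
    for _ in range(horizon):
        s = step(s, rng)
        if s == target:
            return True
    return False

rng = random.Random(0)
trials = 2000
f_01 = sum(hits(0, 1, 200, rng) for _ in range(trials)) / trials
print(f_01)  # empirically very close to 1, as the theorem predicts
```

With $i \leftrightarrow j$ and $j$ recurrent, the estimated hitting probability should sit at (or indistinguishably close to) 1.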


2 Answers

Accepted answer:

$\def\F{\mathscr{F}}\def\peq{\mathrel{\phantom{=}}{}}$The property used in the equality is actually:\begin{gather*} P(X_n = j \text{ for infinitely many } n \mid X_0 = j) = 1\\ \Longleftrightarrow P(X_n = j \text{ for some } n > m + 1 \mid X_0 = j) = 1. \end{gather*}

For $\Longrightarrow$, it is obvious by$$ \{X_n = j \text{ for infinitely many } n\} \subseteq \{X_n = j \text{ for some } n > m + 1\}. $$

For $\Longleftarrow$, define$$ τ_0 = 0, \quad τ_{n + 1} = \inf\{k > τ_n + m + 1 \mid X_k = j\}, \quad \forall n \geqslant 0; $$ then$$ \{X_n = j \text{ for infinitely many } n\} = \bigcap_{n = 1}^∞ \{τ_n < ∞\} = \lim_{n → ∞} \bigcap_{k = 1}^n \{τ_k < ∞\}. $$

So it suffices to prove by induction on $n$ that$$ P(τ_1 < ∞, \cdots, τ_n < ∞ \mid X_0 = j) = 1 \quad \forall n \geqslant 1. $$

For $n = 1$,$$ P(τ_1 < ∞ \mid X_0 = j) = P(X_n = j \text{ for some } n > m + 1 \mid X_0 = j) = 1. $$

Now assume that it holds for $n$. Denote $A_k = \{τ_1 < k - m - 1, \cdots, τ_{n - 1} < k - m - 1\}$ for $k \geqslant m + 2$; then$$ \{τ_1 < ∞, \cdots, τ_n < ∞\} = \bigcup_{k = m + 2}^∞ (A_k \cap \{τ_n = k\}) $$ and\begin{align*} &\peq P(τ_1 < ∞, \cdots, τ_n < ∞, τ_{n + 1} < ∞ \mid X_0 = j)\\ &= \sum_{k = m + 2}^∞ P(A_k \cap \{τ_n = k\} \cap \{τ_{n + 1} < ∞\} \mid X_0 = j)\\ &= \sum_{k = m + 2}^∞ \sum_{l = k + m + 2}^∞ P(A_k \cap \{τ_n = k\} \cap \{τ_{n + 1} = l\} \mid X_0 = j). \tag{1} \end{align*}

Note that $\{τ_n = k\} \subseteq \{X_k = j\}$, so\begin{align*} &\peq P(A_k \cap \{τ_n = k\} \cap \{τ_{n + 1} = l\} \mid X_0 = j)\\ &= P(A_k \cap \{τ_n = k\} \cap \{X_k = j\} \cap \{τ_{n + 1} = l\} \mid X_0 = j)\\ &= P(A_k \cap \{τ_n = k\} \cap \{X_k = j\} \mid X_0 = j)\\ &\peq · P(τ_{n + 1} = l \mid A_k \cap \{τ_n = k\} \cap \{X_k = j\}). \end{align*}

By the Markov property and homogeneity (on $\{τ_n = k\}$, the event $\{τ_{n + 1} = l\}$ depends only on $X_{k + m + 2}, \cdots, X_l$),\begin{align*} &\peq P(τ_{n + 1} = l \mid A_k \cap \{τ_n = k\} \cap \{X_k = j\}) = P(τ_{n + 1} = l \mid X_k = j)\\ &= P(X_{k + m + 2} ≠ j, \cdots, X_{l - 1} ≠ j, X_l = j \mid X_k = j)\\ &= P(X_{m + 2} ≠ j, \cdots, X_{l - k - 1} ≠ j, X_{l - k} = j \mid X_0 = j)\\ &= P(τ_1 = l - k \mid X_0 = j). \end{align*}

Combining with$$ P(A_k \cap \{τ_n = k\} \cap \{X_k = j\} \mid X_0 = j) = P(A_k \cap \{τ_n = k\} \mid X_0 = j) $$ and the induction hypothesis yields\begin{align*} (1) &= \sum_{k = m + 2}^∞ \sum_{l = k + m + 2}^∞ P(A_k \cap \{τ_n = k\} \mid X_0 = j)\, P(τ_1 = l - k \mid X_0 = j)\\ &= \sum_{k = m + 2}^∞ \sum_{l = m + 2}^∞ P(A_k \cap \{τ_n = k\} \mid X_0 = j)\, P(τ_1 = l \mid X_0 = j)\\ &= \left( \sum_{k = m + 2}^∞ P(A_k \cap \{τ_n = k\} \mid X_0 = j) \right) \left( \sum_{l = m + 2}^∞ P(τ_1 = l \mid X_0 = j) \right)\\ &= P(τ_1 < ∞, \cdots, τ_n < ∞ \mid X_0 = j)\, P(τ_1 < ∞ \mid X_0 = j)\\ &= P(τ_1 < ∞, \cdots, τ_n < ∞ \mid X_0 = j) = 1. \end{align*} End of induction.

Second answer:

The point of the third equality is that if you almost surely return once, then, by the Markov property, you almost surely return infinitely many times. This equality would not be true if the probability of returning once were merely positive rather than equal to 1.
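This last remark can be made concrete: by the strong Markov property, the probability of at least $n$ returns to $j$ is $p^n$ where $p = f_{jj}$, so only $p = 1$ survives the limit $n \to \infty$. The value $p = 0.9$ below is an illustrative assumption.

```python
# P(at least n returns to j | X_0 = j) = p**n, with p = f_jj.
p = 0.9  # hypothetical return probability strictly below 1

at_least_n_returns = [p**n for n in (1, 5, 50)]
print(at_least_n_returns)
# Since p < 1, p**n -> 0: the chain a.s. returns only finitely often.
# Only p = 1 gives p**n = 1 for all n, i.e. infinitely many returns a.s.
```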