The sums over RVs between two return times are independent for a Markov chain


Let $X_0,X_1,\ldots,X_n,\ldots$ be an irreducible Markov chain with finite state space. Define $\tau_{x,0}^+=0$ and $\tau_{x,k}^+=\min\{t : t>\tau_{x,k-1}^+,\ X_t=x\}$. In plain words, $\tau_{x,k}^+$ is the time of the $k$th return to $x$. Let $Y_k=\sum_{t=\tau_{x,k-1}^+}^{\tau_{x,k}^+-1}X_t$, which is the sum of $X_t$ between the $(k-1)$th and the $k$th visit to $x$. The question is to show:

$Y_k$ where $k=1,2,...$ are i.i.d. random variables.

Can anyone help prove the above statement? Thank you!
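For concreteness, the definitions can be illustrated with a small simulation (a minimal sketch; the two-state chain on $\{1,2\}$ and its transition probabilities are hypothetical choices, not part of the question):

```python
import random

# Hypothetical irreducible chain on states {1, 2}; reference state x = 1.
# tau_{x,k}^+ is the k-th return time to x; Y_k sums X_t over the k-th
# excursion, i.e. for t = tau_{x,k-1}^+, ..., tau_{x,k}^+ - 1.
random.seed(0)

def step(s):
    # From 1: jump to 2 with prob. 0.3; from 2: jump to 1 with prob. 0.4.
    if s == 1:
        return 2 if random.random() < 0.3 else 1
    return 1 if random.random() < 0.4 else 2

def excursion_sums(x=1, n_excursions=5):
    """Starting from X_0 = x, return the list [Y_1, ..., Y_n]."""
    s, sums = x, []
    for _ in range(n_excursions):
        total = 0
        while True:
            total += s          # accumulate X_t within the current excursion
            s = step(s)
            if s == x:          # chain has returned to x: excursion ends
                break
        sums.append(total)
    return sums

print(excursion_sums())
```

In this toy chain every excursion starts with one visit to $1$ and spends some geometric number of steps at $2$, so each $Y_k$ is an odd integer $1 + 2m$ with $m \ge 0$.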


PS: The previous version of this question and possibly related material are posted below.

Given a finite-length Markov chain $X_0,X_1,\ldots,X_n$ with finite state space, a random variable $\tau$ is called a stopping time if the event $\{\tau = t\}$ can be determined by $X_0,X_1,\ldots,X_t$ for every $0 \le t\le n$. The question is to show

  1. $X_0,X_1,\ldots,X_{\tau - 1}$ are independent of $X_{\tau}, X_{\tau + 1}, \ldots, X_{n}$.
  2. $\sum_{t=0}^{\tau-1} X_t$ and $\sum_{t=\tau}^n X_t$ are i.i.d.


BEST ANSWER

I have updated the answer to a full solution. The solution uses almost nothing beyond elementary probability calculations, without any heavy machinery, so it should be easy to follow.

PART I

The "identically distributed" part follows from the strong Markov property stated in your material.

Let $I_k = \tau_{x,k}^+ - \tau_{x,k-1}^+$; then for every $k=1,2,\ldots$, we have

$\mathbb{P}\left\{ I_k = m,\, Y_k = s \mid \tau_{x,k-1}^+ < \infty \right\} = \mathbb{P}_{\delta_x}\left\{ X_m = x,\, X_{m-1} \ne x, \ldots, X_1 \ne x,\ \sum_{t=0}^{m-1} X_t = s \right\}$

for every $m \in \mathbb{N}^+$ and every $s$. Thus

$\mathbb{P}\left\{ Y_k = s \mid \tau_{x,k-1}^+ < \infty \right\} = \sum_{m \in \mathbb{N}^+} \mathbb{P}_{\delta_x}\left\{ X_m = x,\, X_{m-1} \ne x, \ldots, X_1 \ne x,\ \sum_{t=0}^{m-1} X_t = s \right\}$

from which we see that the $Y_k$ are identically distributed, since the right-hand side does not depend on $k$.
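The identical-distribution claim can also be probed empirically (a hedged Monte Carlo sketch on a hypothetical two-state chain on $\{1,2\}$ with $x=1$; the transition probabilities are illustrative, not part of the problem):

```python
import random
from collections import Counter

random.seed(1)

def step(s):
    # Hypothetical chain: P(1 -> 2) = 0.3, P(2 -> 1) = 0.4.
    if s == 1:
        return 2 if random.random() < 0.3 else 1
    return 1 if random.random() < 0.4 else 2

def first_two_excursion_sums(x=1):
    """One independent run of the chain from X_0 = x; returns (Y_1, Y_2)."""
    s, ys = x, []
    for _ in range(2):
        total = 0
        while True:
            total += s          # accumulate X_t within the excursion
            s = step(s)
            if s == x:          # returned to x: excursion ends
                break
        ys.append(total)
    return ys

N = 20000
c1, c2 = Counter(), Counter()
for _ in range(N):
    y1, y2 = first_two_excursion_sums()
    c1[y1] += 1
    c2[y2] += 1

# The empirical laws of Y_1 and Y_2 should agree up to Monte Carlo noise;
# in particular P(Y_k = 1) = P(1 -> 1) = 0.7 for both k = 1 and k = 2.
for s in sorted(set(c1) | set(c2)):
    print(s, round(c1[s] / N, 3), round(c2[s] / N, 3))
```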

PART II

The "strong Markov property" stated in your material is, I think, far from sufficient to prove independence. The form of the strong Markov property that we need is

$\mathbb{P}\left( F \cap H \mid \tau < \infty, X_\tau = v \right) = \mathbb{P}\left( F \mid \tau < \infty, X_\tau = v \right)\, \mathbb{P}\left( H \mid \tau < \infty, X_\tau = v \right)$

where $\tau$ is chosen as the present time, $F$ is any future event and $H$ is any historical event. If a time $t$ is chosen as the present time, then any event determined by $X_{t+1},X_{t+2},...$ is a future event, and any event determined by $X_0,X_1,...,X_{t-1}$ is a historical event.
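For a fixed present time $t$ this decoupling is just the ordinary Markov property, which can be checked numerically (a sketch on a hypothetical chain on $\{1,2\}$, conditioning on $X_2 = 1$ with future event $F = \{X_3 = 1\}$ and historical event $H = \{X_1 = 1\}$):

```python
import random

random.seed(3)

def step(s):
    # Hypothetical chain: P(1 -> 2) = 0.3, P(2 -> 1) = 0.4.
    if s == 1:
        return 2 if random.random() < 0.3 else 1
    return 1 if random.random() < 0.4 else 2

N = 50000
n_cond = n_joint = n_f = n_h = 0
for _ in range(N):
    x = [1]                       # X_0 = 1
    for _ in range(3):
        x.append(step(x[-1]))     # generate X_1, X_2, X_3
    if x[2] == 1:                 # condition on the present: X_2 = v = 1
        n_cond += 1
        n_f += (x[3] == 1)        # future event F = {X_3 = 1}
        n_h += (x[1] == 1)        # historical event H = {X_1 = 1}
        n_joint += (x[3] == 1 and x[1] == 1)

# P(F, H | X_2 = 1) should match P(F | X_2 = 1) * P(H | X_2 = 1).
print(n_joint / n_cond, (n_f / n_cond) * (n_h / n_cond))
```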

The meaning of the strong Markov property is clear: the information "$\tau<\infty$ and $X_{\tau}=v$" is sufficient to decouple any future event from any historical event. We don't need any other information, such as the exact value of $\tau$.

The proof I have is too long to post here, so take this form as given. Before tackling the problem, we also need to extend the above strong Markov property to multiple stopping times.

First we prove a lemma: conditioning on additional future and historical events does not break independence. As before, let $\tau$ be a stopping time, and let $F$ and $H$ be any future and historical events. In addition, let $F_0, H_0$ be another arbitrary pair of future and historical events; then

$$\begin{aligned}
\mathbb{P}\left( F, H \mid \tau < \infty, X_\tau = v, F_0, H_0 \right)
&= \frac{\mathbb{P}\left( F, H, \tau < \infty, X_\tau = v, F_0, H_0 \right)}{\mathbb{P}\left( \tau < \infty, X_\tau = v, F_0, H_0 \right)}
= \frac{\mathbb{P}\left( F, H, F_0, H_0 \mid \tau < \infty, X_\tau = v \right)}{\mathbb{P}\left( F_0, H_0 \mid \tau < \infty, X_\tau = v \right)} \\
&= \frac{\mathbb{P}\left( F, F_0 \mid \tau < \infty, X_\tau = v \right)\, \mathbb{P}\left( H, H_0 \mid \tau < \infty, X_\tau = v \right)}{\mathbb{P}\left( F_0 \mid \tau < \infty, X_\tau = v \right)\, \mathbb{P}\left( H_0 \mid \tau < \infty, X_\tau = v \right)} \\
&= \mathbb{P}\left( F \mid \tau < \infty, X_\tau = v, F_0 \right)\, \mathbb{P}\left( H \mid \tau < \infty, X_\tau = v, H_0 \right)
\end{aligned}$$

Similarly

$\mathbb{P}\left( F, H \mid \tau < \infty, X_\tau = v, F_0 \right) = \mathbb{P}\left( F \mid \tau < \infty, X_\tau = v, F_0 \right)\, \mathbb{P}\left( H \mid \tau < \infty, X_\tau = v \right)$

$\mathbb{P}\left( F, H \mid \tau < \infty, X_\tau = v, H_0 \right) = \mathbb{P}\left( F \mid \tau < \infty, X_\tau = v \right)\, \mathbb{P}\left( H \mid \tau < \infty, X_\tau = v, H_0 \right)$

It follows that

$$\begin{aligned}
\mathbb{P}\left( F \mid \tau < \infty, X_\tau = v, F_0, H_0 \right)
&= \frac{\mathbb{P}\left( F, \tau < \infty, X_\tau = v, F_0, H_0 \right)}{\mathbb{P}\left( \tau < \infty, X_\tau = v, F_0, H_0 \right)}
= \frac{\mathbb{P}\left( F, H_0 \mid \tau < \infty, X_\tau = v, F_0 \right)}{\mathbb{P}\left( H_0 \mid \tau < \infty, X_\tau = v, F_0 \right)} \\
&= \frac{\mathbb{P}\left( F \mid \tau < \infty, X_\tau = v, F_0 \right)\, \mathbb{P}\left( H_0 \mid \tau < \infty, X_\tau = v \right)}{\mathbb{P}\left( H_0 \mid \tau < \infty, X_\tau = v \right)}
= \mathbb{P}\left( F \mid \tau < \infty, X_\tau = v, F_0 \right)
\end{aligned}$$

meaning that, once the additional future event $F_0$ is in the condition, a further historical event $H_0$ does not affect the conditional probability of a future event. Similarly, we can calculate

$\mathbb{P}\left( H \mid \tau < \infty, X_\tau = v, F_0, H_0 \right) = \mathbb{P}\left( H \mid \tau < \infty, X_\tau = v, H_0 \right)$

It further follows that

$\mathbb{P}\left( F, H \mid \tau < \infty, X_\tau = v, F_0, H_0 \right) = \mathbb{P}\left( F \mid \tau < \infty, X_\tau = v, F_0 \right)\, \mathbb{P}\left( H \mid \tau < \infty, X_\tau = v, H_0 \right) = \mathbb{P}\left( F \mid \tau < \infty, X_\tau = v, F_0, H_0 \right)\, \mathbb{P}\left( H \mid \tau < \infty, X_\tau = v, F_0, H_0 \right)$

by which we conclude that conditioning on any additional future event $F_0$ and historical event $H_0$ does not break the independence between future and historical events. From another perspective: if the condition can be broken into $\tau<\infty$, $X_\tau=v$, a future event $F_0$, and a historical event $H_0$, then any future event $F$ and any historical event $H$ are conditionally independent.

Now, given $\tau_0 = 0$ and multiple stopping times $\tau_1 < \ldots < \tau_k$, let $H_j$ be an event determined by $X_{\tau_{j-1}}, X_{\tau_{j-1}+1}, \ldots, X_{\tau_j - 1}$ for $j = 1, 2, \ldots, k$, and let $H_{k+1}$ be an event determined by $X_{\tau_k}, X_{\tau_k + 1}, \ldots$. In addition, let $K = \left\{ X_{\tau_1} = v_1, \ldots, X_{\tau_k} = v_k \right\}$. We will prove the following strong Markov property for multiple stopping times:

$\mathbb{P}\left( \bigcap_{j=1}^{k+1} H_j \,\middle|\, \tau_k < \infty, K \right) = \prod_{j=1}^{k+1} \mathbb{P}\left( H_j \,\middle|\, \tau_k < \infty, K \right)$

First choose $\tau_k$ as the present time; then $\bigcap_{j=1}^{k} H_j$ is a historical event and $H_{k+1}$ is a future event. Also note that the event

$\left\{ \tau_k < \infty \right\} \cap K = \left\{ \tau_k < \infty, X_{\tau_k} = v_k \right\} \cap \left\{ X_{\tau_1} = v_1, \ldots, X_{\tau_{k-1}} = v_{k-1} \right\}$

where $\left\{ X_{\tau_1} = v_1, \ldots, X_{\tau_{k-1}} = v_{k-1} \right\}$ is a historical event; then

$\mathbb{P}\left( \bigcap_{j=1}^{k+1} H_j \,\middle|\, \tau_k < \infty, K \right) = \mathbb{P}\left( H_{k+1} \,\middle|\, \tau_k < \infty, K \right)\, \mathbb{P}\left( \bigcap_{j=1}^{k} H_j \,\middle|\, \tau_k < \infty, K \right)$

Likewise, to factor $\mathbb{P}\left( \bigcap_{j=1}^{k} H_j \,\middle|\, \tau_k < \infty, K \right)$, choose $\tau_{k-1}$ as the present time; it is easy to verify that $\{\tau_k<\infty\} \cap K$ can be broken into $\tau_{k-1}<\infty$, $X_{\tau_{k-1}}=v_{k-1}$, a future event, and a historical event. Thus

$\mathbb{P}\left( \bigcap_{j=1}^{k} H_j \,\middle|\, \tau_k < \infty, K \right) = \mathbb{P}\left( H_k \,\middle|\, \tau_k < \infty, K \right)\, \mathbb{P}\left( \bigcap_{j=1}^{k-1} H_j \,\middle|\, \tau_k < \infty, K \right)$

Iterating, we arrive at $\mathbb{P}\left( \bigcap_{j=1}^{k+1} H_j \,\middle|\, \tau_k < \infty, K \right) = \prod_{j=1}^{k+1} \mathbb{P}\left( H_j \,\middle|\, \tau_k < \infty, K \right)$.

Now the "independent" part of your problem is a piece of cake. The $k$th return times are exactly the stopping times $\tau_k$ discussed above, and irreducibility on a finite state space guarantees they are almost surely finite. The events $\{Y_k=s_k\}, k=1,2,\ldots$ are exactly events determined by $X_{\tau_{k-1}}, X_{\tau_{k-1}+1}, \ldots, X_{\tau_k - 1}$. Thus the events $\{Y_k=s_k\}, k=1,2,\ldots$ are independent for any choice of $s_k$. Since the $Y_k$ are discrete random variables, it follows that $Y_1, Y_2, \ldots$ are independent.
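As a final sanity check (not a proof), the independence conclusion can be probed by Monte Carlo on the same kind of hypothetical two-state chain: the joint frequency of $\{Y_1 = 1, Y_2 = 1\}$ should match the product of the marginal frequencies.

```python
import random

random.seed(2)

def step(s):
    # Hypothetical chain on {1, 2}: P(1 -> 2) = 0.3, P(2 -> 1) = 0.4.
    if s == 1:
        return 2 if random.random() < 0.3 else 1
    return 1 if random.random() < 0.4 else 2

def excursion_sum(x=1):
    """One excursion sum Y_k; the chain sits at x when each excursion starts."""
    s, total = x, 0
    while True:
        total += s
        s = step(s)
        if s == x:
            return total

N = 20000
joint = m1 = m2 = 0
for _ in range(N):
    y1 = excursion_sum()  # Y_1
    y2 = excursion_sum()  # Y_2: the chain restarts at x after each return
    joint += (y1 == 1 and y2 == 1)
    m1 += (y1 == 1)
    m2 += (y2 == 1)

# Joint frequency of {Y_1 = 1, Y_2 = 1} vs. product of marginal frequencies:
print(joint / N, (m1 / N) * (m2 / N))
```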