Let $X_0,X_1,\ldots,X_n,\ldots$ be an irreducible Markov chain with finite state space. Define $\tau_{x,0}^+=0$ and $\tau_{x,k}^+=\min\{t:t>\tau_{x,k-1}^+,X_t=x\}$. In plain words, $\tau_{x,k}^+$ is the time of the $k$th return to $x$. Let $Y_k=\sum_{t=\tau_{x,k-1}^+}^{\tau_{x,k}^+-1}X_t$, which is the sum of $X_t$ between the $(k-1)$th and the $k$th visit to $x$. The question is to show that
$Y_k$, $k=1,2,\ldots$, are i.i.d. random variables.
Can anyone help prove the above statement? Thank you!
PS: The previous version of this question and some possibly related material are posted below.
Given a finite-length Markov chain $X_0,X_1,\ldots,X_n$ with finite state space, a random variable $\tau$ is called a stopping time if the event $\{\tau = t\}$ is determined by $X_0,X_1,\ldots,X_t$ for every $0 \le t\le n$. The question is to show that
- $X_0,X_1,\ldots,X_{\tau - 1}$ are independent of $X_{\tau}, X_{\tau + 1}, \ldots, X_{n}$.
- $\sum_{t=0}^{\tau-1} X_t$ and $\sum_{t=\tau}^n X_t$ are i.i.d.

I have updated the answer to a full solution. The solution uses little more than elementary probability calculations, with no heavy machinery, so it should be easy to follow.
PART I
For the "identically distributed" part, it follows from the strong Markov property stated in your material.
Let $I_k = \tau_{x,k}^+ - \tau_{x,k-1}^+$. Applying the strong Markov property at the stopping time $\tau_{x,k-1}^+$ (at which the chain sits at $x$), for every $k=1,2,\ldots$ we have
$${\Bbb P}\left( I_k = m, Y_k = s \right) = {\Bbb P}\left( I_1 = m, Y_1 = s \right)$$
for every $m\in\Bbb N^+$ and every $s$. Thus
$${\Bbb P}\left( Y_k = s \right) = \sum_{m=1}^\infty {\Bbb P}\left( I_k = m, Y_k = s \right) = \sum_{m=1}^\infty {\Bbb P}\left( I_1 = m, Y_1 = s \right) = {\Bbb P}\left( Y_1 = s \right),$$
from which we see that the $Y_k$ are identically distributed.
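As an empirical sanity check on the identically-distributed claim (a simulation, not a proof), here is a small sketch; the 3-state chain, its transition matrix, and the choice $x=0$ are made up purely for illustration:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical irreducible chain on states {0, 1, 2}, chosen only for
# illustration.  P[i][j] = P(X_{t+1} = j | X_t = i).
P = [[0.2, 0.5, 0.3],
     [0.4, 0.1, 0.5],
     [0.3, 0.6, 0.1]]

def step(i):
    """Sample the next state from row i of P."""
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[i]):
        acc += p
        if u < acc:
            return j
    return len(P[i]) - 1

def excursion_sums(n, x=0):
    """Start at X_0 = x and return Y_1, ..., Y_n, where Y_k sums X_t over
    tau_{x,k-1}^+ <= t < tau_{x,k}^+ (successive excursions from x)."""
    state, ys = x, []
    for _ in range(n):
        total = state            # X at the start of the excursion (= x)
        state = step(state)
        while state != x:        # walk until the next return to x
            total += state
            state = step(state)
        ys.append(total)
    return ys

ys = excursion_sums(40000)
first, second = ys[0::2], ys[1::2]   # Y_1, Y_3, ... versus Y_2, Y_4, ...
f1, f2 = Counter(first), Counter(second)
for s in sorted(set(f1) | set(f2))[:5]:
    print(s, round(f1[s] / len(first), 3), round(f2[s] / len(second), 3))
```

The two empirical frequency columns should agree up to Monte Carlo noise, consistent with the $Y_k$ sharing one distribution.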
PART II
The "strong Markov property" stated in your material is, I think, far from sufficient to prove independence. The form of the strong Markov property that we need is
$${\Bbb P}\left( F \cap H \mid \tau < \infty, X_\tau = v \right) = {\Bbb P}\left( F \mid \tau < \infty, X_\tau = v \right)\,{\Bbb P}\left( H \mid \tau < \infty, X_\tau = v \right),$$
where $\tau$ is chosen as the present time, $F$ is any future event and $H$ is any historical event. If a time $t$ is chosen as the present time, then any event determined by $X_{t+1},X_{t+2},...$ is a future event, and any event determined by $X_0,X_1,...,X_{t-1}$ is a historical event.
The meaning of the strong Markov property is clear: information that "$\tau<\infty$ and $X_{\tau}=v$" is sufficient to decouple any future event and any historical event. We don't need other information like what exactly $\tau$ is.
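To make this concrete, here is one instance (my own illustration, not from the material): take $\tau=\tau_{x,1}^+$, the first return time to $x$, and take $F=\{X_{\tau+1}=y\}$ and $H=\{X_0=z\}$. The property then reads
$${\Bbb P}\left( X_{\tau+1}=y,\; X_0=z \mid \tau < \infty, X_\tau = x \right) = {\Bbb P}\left( X_{\tau+1}=y \mid \tau < \infty, X_\tau = x \right)\,{\Bbb P}\left( X_0=z \mid \tau < \infty, X_\tau = x \right):$$
knowing that the chain sits at $x$ at time $\tau$ screens off where it goes next from where it started.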
The proof I have is too long to post here; please just take this as given. We are not done yet: we also need to extend the above strong Markov property to multiple stopping times.
First we prove a lemma: providing additional future and historical events as conditions does not break independence. As before, let $\tau$ be a stopping time, and let $F$ and $H$ be any future event and historical event. In addition, let $F_0,H_0$ be another arbitrary future event and historical event, and write $C=\{\tau<\infty,X_\tau=v\}$ for brevity. Since $F\cap F_0$ is again a future event,
$${\Bbb P}\left( F \cap F_0 \cap H \mid C \right) = {\Bbb P}\left( F \cap F_0 \mid C \right){\Bbb P}\left( H \mid C \right).$$
Similarly,
$${\Bbb P}\left( F_0 \cap H \mid C \right) = {\Bbb P}\left( F_0 \mid C \right){\Bbb P}\left( H \mid C \right),$$
which says ${\Bbb P}\left( H \mid C, F_0 \right) = {\Bbb P}\left( H \mid C \right)$. It follows that
$${\Bbb P}\left( F \cap H \mid C, F_0 \right) = \frac{{\Bbb P}\left( F \cap F_0 \cap H \mid C \right)}{{\Bbb P}\left( F_0 \mid C \right)} = \frac{{\Bbb P}\left( F \cap F_0 \mid C \right)}{{\Bbb P}\left( F_0 \mid C \right)}\,{\Bbb P}\left( H \mid C \right) = {\Bbb P}\left( F \mid C, F_0 \right){\Bbb P}\left( H \mid C, F_0 \right),$$
meaning that providing an additional future event $F_0$ in the condition does not break the independence between any future event and any historical event. Similarly (applying the above with $H$ replaced by $H\cap H_0$) we can calculate
$${\Bbb P}\left( F \cap H \cap H_0 \mid C, F_0 \right) = {\Bbb P}\left( F \mid C, F_0 \right){\Bbb P}\left( H \cap H_0 \mid C, F_0 \right).$$
It further follows that
$${\Bbb P}\left( F \cap H \mid C, F_0, H_0 \right) = \frac{{\Bbb P}\left( F \cap H \cap H_0 \mid C, F_0 \right)}{{\Bbb P}\left( H_0 \mid C, F_0 \right)} = {\Bbb P}\left( F \mid C, F_0, H_0 \right){\Bbb P}\left( H \mid C, F_0, H_0 \right),$$
by which we conclude that providing any additional future event $F_0$ and historical event $H_0$ does not break the independence between any future event and historical event. From another perspective: if the conditioning event can be broken into $\{\tau<\infty, X_\tau=v\}$, a future event $F_0$ and a historical event $H_0$, then any future event $F$ and any historical event $H$ are conditionally independent.
Now, given $\tau_0 = 0$ and multiple stopping times $\tau_1 < \cdots < \tau_k$, let $H_j$ be an event determined by $X_{\tau_{j-1}},X_{\tau_{j-1}+1},\ldots,X_{\tau_j - 1}$ for $j = 1,2,\ldots,k$, and let $H_{k+1}$ be an event determined by $X_{\tau_k},X_{\tau_k+1},\ldots$. In addition, let $K = \left\{ X_{\tau_1} = v_1, \ldots, X_{\tau_k} = v_k \right\}$. We will prove the following strong Markov property for multiple stopping times:
$${\Bbb P}\left( \mathop \bigcap \limits_{j = 1}^{k+1} H_j \,\middle|\, \tau_k < \infty, K \right) = \mathop \prod \limits_{j = 1}^{k+1} {\Bbb P}\left( H_j \mid \tau_k < \infty, K \right).$$
First choose $\tau_k$ as the present time; then $\mathop \bigcap \limits_{j = 1}^k H_j$ is a historical event, and $H_{k+1}$ is a future event. Also note that
$$\{\tau_k<\infty\} \cap K = \{\tau_k<\infty, X_{\tau_k}=v_k\} \cap \left\{ X_{\tau_1} = v_1, \ldots, X_{\tau_{k-1}} = v_{k-1} \right\},$$
where $\left\{ X_{\tau_1} = v_1, \ldots, X_{\tau_{k-1}} = v_{k-1} \right\}$ is a historical event. By the lemma,
$${\Bbb P}\left( \mathop \bigcap \limits_{j = 1}^{k+1} H_j \,\middle|\, \tau_k < \infty, K \right) = {\Bbb P}\left( \mathop \bigcap \limits_{j = 1}^{k} H_j \,\middle|\, \tau_k < \infty, K \right){\Bbb P}\left( H_{k+1} \mid \tau_k < \infty, K \right).$$
Likewise, to handle ${\Bbb P}\left( \mathop \bigcap \limits_{j = 1}^k H_j \,\middle|\, \tau_k < \infty, K \right)$, choose $\tau_{k-1}$ as the present time, and it is easy to verify that $\{\tau_k<\infty\}\cap K$ can be broken into $\{\tau_{k-1}<\infty, X_{\tau_{k-1}}=v_{k-1}\}$, a future event and a historical event. Thus
$${\Bbb P}\left( \mathop \bigcap \limits_{j = 1}^{k} H_j \,\middle|\, \tau_k < \infty, K \right) = {\Bbb P}\left( \mathop \bigcap \limits_{j = 1}^{k-1} H_j \,\middle|\, \tau_k < \infty, K \right){\Bbb P}\left( H_k \mid \tau_k < \infty, K \right).$$
Iteratively, we arrive at ${\Bbb P}\left( {\mathop \bigcap \limits_{j = 1}^{k+1} {H_j}|{\tau _k} < \infty ,K} \right) = \mathop \prod \limits_{j = 1}^{k+1} {\Bbb P}\left( {{H_j}|{\tau _k} < \infty ,K} \right)$.
Now the "independent" part of your problem is a piece of cake. The $k$th return times are nothing more than the stopping times $\tau_k$ discussed above (with each $v_k = x$), and the finite state space together with irreducibility guarantees that they are finite. The events $\{Y_k=s_k\}, k=1,2,\ldots$ are nothing more than events determined by $X_{\tau_{k-1}},X_{\tau_{k-1}+1},\ldots,X_{\tau_k-1}$. Thus the events $\{Y_k=s_k\}, k=1,2,\ldots$ are independent for any choice of $s_k, k=1,2,\ldots$. Since the $Y_k$ are discrete random variables, this implies that $Y_1,Y_2,\ldots$ are independent.
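For a final empirical sanity check of the independence conclusion (again a sketch, not a proof, on a made-up irreducible chain with $x=0$), one can compare the joint empirical law of consecutive excursion sums against the product of the marginals:

```python
import random
from collections import Counter

random.seed(2)

# Illustrative irreducible chain on {0, 1, 2}; x = 0 is the reference state.
# P[i][j] = P(X_{t+1} = j | X_t = i).
P = [[0.2, 0.5, 0.3],
     [0.4, 0.1, 0.5],
     [0.3, 0.6, 0.1]]

def step(i):
    """Sample the next state from row i of P."""
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[i]):
        acc += p
        if u < acc:
            return j
    return len(P[i]) - 1

def excursion_sums(n, x=0):
    """Return Y_1, ..., Y_n: sums of X_t over successive excursions from x."""
    state, ys = x, []
    for _ in range(n):
        total = state
        state = step(state)
        while state != x:
            total += state
            state = step(state)
        ys.append(total)
    return ys

ys = excursion_sums(60000)
pairs = list(zip(ys[0::2], ys[1::2]))          # (Y_1, Y_2), (Y_3, Y_4), ...
joint = Counter(pairs)
marg1 = Counter(s for s, _ in pairs)
marg2 = Counter(t for _, t in pairs)
n = len(pairs)
for s in range(3):
    for t in range(3):
        lhs = joint[(s, t)] / n                 # est. P(Y_k = s, Y_{k+1} = t)
        rhs = (marg1[s] / n) * (marg2[t] / n)   # product of marginal estimates
        print((s, t), round(lhs, 3), round(rhs, 3))
```

If the $Y_k$ are independent, each printed pair of numbers should agree up to Monte Carlo noise; a systematic gap would indicate dependence.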