Let $P$ be a Markov kernel on $X \times \mathcal{X}$ and $\xi$ be a probability measure on $\mathcal{X}$. Suppose that $\xi$ is reversible with respect to $P$, i.e. $$\xi \otimes P(A \times B)= \xi \otimes P (B\times A)$$ where $\xi \otimes P(A\times B)=\int_A \xi(dx)P(x,B)$.
Then the time homogeneous Markov chain $X_k$ with Markov kernel $P$ and initial distribution $\xi$ is reversible, i.e. for all $n\in \mathbb{N}$, $(X_0,\dots, X_n)$ and $(X_n,\dots, X_0)$ have the same distribution.
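In the finite-state case the reversibility condition reduces to detailed balance, $\xi(x)P(x,y) = \xi(y)P(y,x)$, and the conclusion of the theorem says every path has the same probability as its reversal. Both are easy to check numerically; here is a sketch for a hypothetical 3-state birth-death chain (birth-death chains are always reversible with respect to their stationary distribution):

```python
import numpy as np
from itertools import product

# Hypothetical 3-state birth-death chain; birth-death chains are
# always reversible with respect to their stationary distribution.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])

# Solve xi P = xi, sum(xi) = 1 for the stationary distribution xi.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
xi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Detailed balance: xi(x) P(x, y) == xi(y) P(y, x), which is the
# condition xi ⊗ P(A × B) = xi ⊗ P(B × A) specialized to points.
flux = xi[:, None] * P          # flux[x, y] = xi(x) P(x, y)
assert np.allclose(flux, flux.T)

# The theorem's conclusion for a finite chain: the probability of a
# path, xi(x0) * prod_k P(x_k, x_{k+1}), equals that of its reversal.
def path_prob(path):
    p = xi[path[0]]
    for x, y in zip(path, path[1:]):
        p *= P[x, y]
    return p

for path in product(range(3), repeat=4):   # all (X_0, ..., X_3) paths
    assert np.isclose(path_prob(path), path_prob(path[::-1]))
```

Swapping in a non-reversible kernel (e.g. a biased cycle) makes both assertions fail, which is a useful sanity check that the conditions are not vacuous.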
The proof reads as follows.
The proof is by induction. For $n=1$, clearly reversibility means $(X_0,X_1)$ and $(X_1,X_0)$ have the same distribution. Assume that such is the case for some $n\ge 1$. By the Markov property, $X_0$ and $(X_1,\dots, X_n)$ are conditionally independent given $X_1$, and $X_{n+1}$ and $(X_n,\dots, X_0)$ are conditionally independent given $X_1$. Moreover, by stationarity and reversibility, $(X_{n+1},X_n)$ has the same distribution as $(X_0,X_1)$ and by the induction assumption, $(X_1,\dots, X_{n+1})$ and $(X_n,\dots , X_0)$ have the same distribution. This proves that $(X_0,\dots , X_{n+1})$ and $(X_{n+1},\dots ,X_0)$ have the same distribution.
Questions.
1. Why are $X_{n+1}$ and $(X_n,\dots, X_0)$ conditionally independent given $X_1$?
I think this is a typo and should read that $X_{n+1}$ and $(X_n,\dots, X_1)$ are conditionally independent given $X_0$. Still, I don't know how to derive this from the conditional independence form of the Markov property, which states that $$E[YZ|X_k] = E[Y|X_k]E[Z|X_k]$$ for every bounded $\sigma(X_j, j\ge k)$-measurable random variable $Y$ and every bounded $\mathscr{F}_k^X$-measurable random variable $Z$. Here time runs in the reverse order, so how do we apply the standard conditional independence of the past and the future of a Markov chain given its present state?
2. Next, given this, the fact that $(X_{n+1},X_n)$ has the same distribution as $(X_0,X_1)$, and the induction assumption that $(X_1,\dots, X_{n+1})$ and $(X_n,\dots, X_0)$ have the same distribution, how do we derive that $(X_0,\dots, X_{n+1})$ and $(X_{n+1},\dots, X_0)$ have the same distribution?
I would greatly appreciate some help here.
I think I have proved this and will leave an answer here as a reference.
So the statement is false as written. It should say that $X_{n+1}$ and $(X_n,\dots, X_0)$ are conditionally independent given $X_n$, and that $X_0$ and $(X_1,\dots, X_{n+1})$ are conditionally independent given $X_1$; both follow directly from the Markov property.
Now, by stationarity and reversibility we have $(X_0,X_1) \equiv (X_{n+1},X_n)$ in distribution, and by stationarity together with the induction assumption, $(X_1,\dots, X_{n+1}) \equiv (X_n, \dots, X_0)$ in distribution.
Hence, we have $X_{n+1}|X_n \equiv X_0|X_1$ and $(X_1,\dots, X_{n+1})|X_1 \equiv (X_n,\dots, X_0)|X_n$. By this I mean the regular conditional distribution given by the Markov kernel; since the joint distributions agree by the previous step, and in particular the conditioning marginals agree, the corresponding conditional distributions are the same.
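In the finite-state setting the claim $X_0|X_1 \equiv X_{n+1}|X_n$ can be made concrete: under stationarity the backward kernel is $\overleftarrow{P}(y,x) = \xi(x)P(x,y)/\xi(y)$, and detailed balance makes it coincide with $P$ itself, which is exactly the kernel of $X_{n+1}$ given $X_n$. A quick sketch, reusing a hypothetical reversible 3-state chain:

```python
import numpy as np

# Hypothetical reversible 3-state chain (birth-death, hence reversible)
# with stationary distribution xi computed by hand for this example.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
xi = np.array([0.25, 0.5, 0.25])   # solves xi P = xi

# Backward kernel: under stationarity the conditional law of X_0 given
# X_1 = y is  P_back(y, x) = xi(x) P(x, y) / xi(y).
P_back = (xi[:, None] * P).T / xi[:, None]

# Detailed balance xi(x) P(x, y) = xi(y) P(y, x) forces P_back == P,
# so X_0 | X_1 has the same kernel as X_{n+1} | X_n, namely P itself.
assert np.allclose(P_back, P)
```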
Therefore, using the conditional independence, we have $(X_0,\dots, X_{n+1})|X_1 \equiv (X_0|X_1) \otimes \big((X_1,\dots, X_{n+1})|X_1\big)$, where $\otimes$ denotes the product distribution.
Similarly, $(X_{n+1},\dots, X_0)|X_n \equiv (X_{n+1}|X_n) \otimes \big((X_n,\dots, X_0)|X_n\big)$.
Now we integrate over the distributions of $X_1$ and $X_n$, respectively, to recover the joint distributions of $(X_0,\dots, X_{n+1})$ and $(X_{n+1},\dots, X_0)$. Using the equivalences above and the fact that all the marginal distributions coincide ($X_1$ and $X_n$ both have law $\xi$ by stationarity), the two joint distributions coincide, i.e. $(X_0,\dots, X_{n+1})$ and $(X_{n+1},\dots, X_0)$ have the same distribution.
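This last step can also be checked numerically for a finite chain: factor the joint law of $(X_0,\dots,X_{n+1})$ through $X_1$ using the backward kernel, then compare with the direct path probabilities and with the reversed path. A sketch for a hypothetical reversible 3-state chain and $n = 2$:

```python
import numpy as np
from itertools import product

# Hypothetical reversible 3-state chain with stationary distribution xi.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
xi = np.array([0.25, 0.5, 0.25])             # solves xi P = xi
P_back = (xi[:, None] * P).T / xi[:, None]   # law of X_0 given X_1

def forward_prob(path):
    """Direct path probability: xi(x0) * prod_k P(x_k, x_{k+1})."""
    p = xi[path[0]]
    for x, y in zip(path, path[1:]):
        p *= P[x, y]
    return p

def via_x1(path):
    """Factor through X_1: law of X_1, times the conditional law of X_0
    given X_1 (backward kernel), times the forward law of the rest."""
    x0, x1 = path[0], path[1]
    p = xi[x1] * P_back[x1, x0]
    for x, y in zip(path[1:], path[2:]):
        p *= P[x, y]
    return p

n = 2
for path in product(range(3), repeat=n + 2):    # all (X_0, ..., X_{n+1})
    p = forward_prob(path)
    assert np.isclose(p, via_x1(path))              # factorization through X_1
    assert np.isclose(p, forward_prob(path[::-1]))  # reversibility of paths
```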