How to prove the convergence of Markov chains on a continuous state space?


Consider a discrete-time Markov chain $\{X_t, t=1,2,...\}$ taking values in a continuous state space $S$. The transition kernel of $X_t$ is denoted $P(x,A)$, i.e. the probability of reaching the measurable set $A$ from state $x$.

Then, the stationary distribution of $X_t$ (marked as $\pi$) can be obtained by solving

$\pi(A)=\int_S P(x,A)\,\pi(dx),\quad \text{for all measurable } A\subseteq S.$
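As a concrete illustration of this stationarity equation (not part of the question itself, just a hypothetical example), consider the AR(1) chain $X_{t+1} = \rho X_t + \varepsilon_t$ with $\varepsilon_t \sim N(0,\sigma^2)$. Its kernel is $P(x,A) = \Pr(\rho x + \varepsilon \in A)$, and it is known that $\pi = N(0, \sigma^2/(1-\rho^2))$ satisfies the equation. The sketch below verifies this numerically for sets of the form $A = (-\infty, a]$; the parameter values $\rho = 0.8$, $\sigma = 1$ are arbitrary choices for the demonstration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical example: AR(1) chain X_{t+1} = rho*X_t + N(0, sigma^2).
# Candidate stationary law: pi = N(0, sigma^2 / (1 - rho^2)).
rho, sigma = 0.8, 1.0
pi = norm(0.0, sigma / np.sqrt(1.0 - rho**2))

# Check pi(A) = integral of P(x, A) pi(dx) for A = (-inf, a].
# Here P(x, A) = Phi((a - rho*x) / sigma), with Phi the standard normal CDF.
a = 0.5
xs = np.linspace(-12.0, 12.0, 40001)          # quadrature grid covering the bulk of pi
dx = xs[1] - xs[0]
integrand = norm.cdf((a - rho * xs) / sigma) * pi.pdf(xs)
lhs = np.sum(integrand) * dx                  # Riemann-sum approximation of the integral
rhs = pi.cdf(a)                               # pi(A) directly

print(abs(lhs - rhs))                         # should be numerically negligible
```

The agreement for every such half-line $A$ (half-lines generate the Borel sets of $\mathbb{R}$) is what it means for $\pi$ to be stationary; the question of *proving* existence for a general kernel is exactly what is asked below.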

My question is: for a given transition kernel $P(x,A)$, how can one prove the existence of a stationary distribution $\pi$?

Furthermore, if I want to learn more about stationary distributions and convergence properties of continuous-state-space (discrete-time) Markov chains, which books or papers should I read?