Equivalence of definition of Markov chain


Here are two definitions of a Markov chain, labeled $(1)$ and $(2)$ below. In Durrett's text it is proved that $(2)$ implies $(1)$. However, I am not sure whether $(1)$ implies $(2)$.

Definition (1): A sequence of random variables $(X_n)_{n\geq 1}$ is said to be a Markov chain if \begin{align*} P(B\mid X_1,\dots, X_n) = P(B\mid X_n)\quad \text{a.s.} \quad \forall B\in \sigma(X_{n+1}),\ \forall n\geq 2. \end{align*}

Definition (2): Let $(S,\mathcal{S})$ be the state space. A function $p: S\times \mathcal{S}\rightarrow \mathbb{R}$ is said to be a transition probability if for each $A\in \mathcal{S}$, $x\mapsto p(x, A)$ is measurable, and for each $x\in S$, $p(x,\cdot)$ is a probability measure. Given an initial distribution $\mu$ on $(S,\mathcal{S})$, the sequence $X_0, X_1,\dots$ is a Markov chain with transition probability $p$ if its finite-dimensional distributions satisfy \begin{align*} P(X_j\in B_j,\ 0\leq j\leq n)=\int_{B_0}\mu(dx_0)\int_{B_1}p(x_0,dx_1)\cdots\int_{B_n}p(x_{n-1},dx_n). \end{align*}
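For a finite state space, the product-of-integrals formula in Definition (2) reduces to a product of matrix entries. Here is a minimal sketch with a hypothetical two-state kernel `p` and initial distribution `mu` (these specific numbers are illustrative, not from the question):

```python
import itertools

# Hypothetical two-state chain on S = {0, 1}.
mu = [0.5, 0.5]           # initial distribution mu
p = [[0.9, 0.1],          # transition kernel: p[a][b] = p(a, {b})
     [0.2, 0.8]]

def path_prob(path):
    """P(X_0 = path[0], ..., X_n = path[n]) via the product formula
    mu(x_0) p(x_0, x_1) ... p(x_{n-1}, x_n)."""
    prob = mu[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= p[a][b]
    return prob

# Sanity check: summing over all length-4 paths gives total mass 1,
# confirming the formula defines a probability distribution.
total = sum(path_prob(w) for w in itertools.product([0, 1], repeat=4))
```

Note the homogeneity baked into the formula: the same matrix `p` is used at every step, which is exactly the restriction the question asks about.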

My question: the second definition forces the conditional distribution $P(X_{n+1}\in\cdot\mid X_n)$ to be the same function of $X_n$ for all $n\in \mathbb{N}$ (it is proved that $P(X_{n+1}\in B\mid X_n)=p(X_n,B)$, so we treat $p(\cdot, B)$ as the conditional distribution), while the first definition does not seem to have this restriction. Does anyone have any idea?

Thanks in advance!

Best answer:

You are correct: the second definition describes a time-homogeneous Markov chain, while the first allows the transition mechanism to vary with $n$, i.e. a time-inhomogeneous chain. So $(1)$ does not imply $(2)$ as stated; it only implies the existence of a (possibly different) transition probability $p_n$ at each step.
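To make the gap concrete, here is a minimal sketch of a chain satisfying Definition (1) but not Definition (2): each step uses its own kernel, so the Markov property holds by construction, but no single kernel reproduces every step. The kernels `p0` and `p1` are hypothetical, chosen only to disagree:

```python
# A time-inhomogeneous chain on S = {0, 1}: the step from time 0 to 1
# uses kernel p0, the step from time 1 to 2 uses kernel p1.
p0 = [[0.9, 0.1],
      [0.2, 0.8]]
p1 = [[0.5, 0.5],
      [0.5, 0.5]]

# Compare the one-step transition probabilities out of state 0:
step1 = p0[0][1]   # P(X_1 = 1 | X_0 = 0)
step2 = p1[0][1]   # P(X_2 = 1 | X_1 = 0)

# step1 != step2, so no single transition probability p can satisfy
# P(X_{n+1} in B | X_n) = p(X_n, B) for both n = 0 and n = 1.
```

The chain still satisfies $(1)$, since $X_{n+1}$ is generated from $X_n$ alone at every step; only the homogeneity required by $(2)$ fails.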