Understanding proof of simple Markov property


The simple Markov property says that if $X$ is $Markov(\lambda, P)$, then conditional on $X_m = i$, $(X_{m+n})_{n\geq 0}$ is $Markov(\delta_i, P)$ and is independent of the random variables $X_0, ..., X_m$.

I have proved the first part of the theorem, that $(X_{m+n})_{n\geq 0}$ is $Markov(\delta_i, P)$, using the law of total probability. However, I'm not sure how to prove independence. It suffices to show that $$ P(X_{i_1} = x_1, \dots, X_{i_k} = x_k, X_0=x_0', \dots, X_m = x_m' \mid X_m = i) = P(X_{i_1} = x_1, \dots, X_{i_k}=x_k \mid X_m=i)\,P(X_0=x_0', \dots, X_m = x_m' \mid X_m = i) $$ for all times $i_1, \dots, i_k > m$ (with $x_m' = i$, since the event is otherwise null), but I'm not sure how. I think that the property of $Markov(\lambda, P)$ chains that $$P(X_0=x_0, \dots, X_n = x_n) = \lambda(x_0) P(x_0, x_1) \cdots P(x_{n-1}, x_n)$$ may help, but I can't apply it directly.

I suspect the derivation involves summing over the possible values of the intermediate random variables (using the law of total probability), but I can't quite get the algebra to work out.
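As a sanity check (not a proof), the claimed factorization can be verified numerically for a small chain by brute-force enumeration of paths. The chain parameters, the conditioning time $m = 2$, the horizon $N = 4$, and the state $i = 0$ below are made-up illustrative choices, not from the question.

```python
import itertools
import numpy as np

# Illustrative two-state chain: initial law lam, transition matrix P (made-up values).
lam = np.array([0.3, 0.7])
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

m, N = 2, 4          # condition on time m; paths run up to time N
i = 0                # conditioning state X_m = i

def path_prob(path):
    """P(X_0 = path[0], ..., X_N = path[N]) = lam(x_0) * prod_j P(x_j, x_{j+1})."""
    p = lam[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a, b]
    return p

# Normalizing constant: summing over all paths through state i at time m
# marginalizes out everything else, so Z = P(X_m = i).
paths = [p for p in itertools.product(range(2), repeat=N + 1) if p[m] == i]
Z = sum(path_prob(p) for p in paths)

# Check: P(past = u, future = v | X_m = i)
#      = P(past = u | X_m = i) * P(future = v | X_m = i)
for u in itertools.product(range(2), repeat=m):          # values of X_0..X_{m-1}
    for v in itertools.product(range(2), repeat=N - m):  # values of X_{m+1}..X_N
        joint = path_prob(u + (i,) + v) / Z
        past = sum(path_prob(u + (i,) + w)
                   for w in itertools.product(range(2), repeat=N - m)) / Z
        fut = sum(path_prob(w + (i,) + v)
                  for w in itertools.product(range(2), repeat=m)) / Z
        assert abs(joint - past * fut) < 1e-12
print("conditional independence verified")
```

Running it prints `conditional independence verified`; the assertions hold exactly (up to floating point) because the path probability factorizes at time $m$.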

Best answer

Indeed, the product formula can't be applied directly, but by explicitly introducing the intermediate variables and summing over them using the law of total probability, the desired result follows from a long (though routine) calculation.
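A sketch of that calculation, for the case of consecutive future times (general times $m < i_1 < \dots < i_k$ then follow by summing out the skipped values): by the product formula, for any path with $x_m = i$,

$$\begin{aligned}
&P(X_0=x_0,\dots,X_m=i,\,X_{m+1}=y_1,\dots,X_{m+k}=y_k)\\
&\qquad=\underbrace{\lambda(x_0)P(x_0,x_1)\cdots P(x_{m-1},i)}_{=\;P(X_0=x_0,\dots,X_m=i)}\cdot P(i,y_1)P(y_1,y_2)\cdots P(y_{k-1},y_k).
\end{aligned}$$

Dividing by $P(X_m=i)$ gives
$$P(\text{past},\,\text{future}\mid X_m=i)=P(\text{past}\mid X_m=i)\cdot P(i,y_1)\cdots P(y_{k-1},y_k),$$
and summing the first identity over all pasts $x_0,\dots,x_{m-1}$ shows that $P(\text{future}\mid X_m=i)=P(i,y_1)\cdots P(y_{k-1},y_k)$, which yields exactly the desired factorization.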