Why are these different notions of the Markov property equivalent:
1. $$\forall A\in\mathcal{S}\qquad \mathbb{P}(X_t\in A\mid\mathcal{F}_s)=\mathbb{P}(X_t\in A\mid X_s)$$
2. $$\forall f:S\to\mathbb{R} \text{ bounded and measurable}\qquad \mathbb{E}[f(X_t)\mid\mathcal{F}_s]=\mathbb{E}[f(X_t)\mid X_s]$$
for all $t\ge s\ge0$, where $(X_t)_{t\ge0}$ is an $(S,\mathcal{S})$-valued adapted stochastic process on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with a filtration $(\mathcal{F}_t)_{t\ge 0}$.
Remark: By using indicator functions ($f=1_A$), (1.) easily follows from (2.), but how does one prove the other direction?
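My attempt so far: I suspect the standard route from (1.) to (2.) is the usual approximation by simple functions, though I am not sure every step is justified:

```latex
% Step 1: indicators. For f = 1_A the two sides of (2.) are exactly (1.):
\mathbb{E}[1_A(X_t)\mid\mathcal{F}_s]
  =\mathbb{P}(X_t\in A\mid\mathcal{F}_s)
  =\mathbb{P}(X_t\in A\mid X_s)
  =\mathbb{E}[1_A(X_t)\mid X_s]
% Step 2: simple functions, by linearity of conditional expectation:
f=\sum_{k=1}^{n}c_k\,1_{A_k},\qquad A_k\in\mathcal{S},\ c_k\in\mathbb{R}
% Step 3: a bounded measurable f is a uniform limit of simple functions f_n,
% and |E[f_n(X_t)|G] - E[f(X_t)|G]| <= ||f_n - f||_sup a.s. for any sub-sigma-
% algebra G, so the identity passes to the limit on both sides:
\mathbb{E}[f(X_t)\mid\mathcal{F}_s]
  =\lim_{n\to\infty}\mathbb{E}[f_n(X_t)\mid\mathcal{F}_s]
  =\lim_{n\to\infty}\mathbb{E}[f_n(X_t)\mid X_s]
  =\mathbb{E}[f(X_t)\mid X_s]\quad\text{a.s.}
```

Is this correct, or does one need a monotone class argument here?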
Then there exists another definition:
3. $$\forall n\in \mathbb{N}\quad\forall t_0\le t_1 \le \dots\le t_n \le s \le t \quad \forall i,j,i_0,\dots,i_n\in S \qquad \mathbb{P}(X_t=j\mid X_s=i,X_{t_n}=i_n,\dots,X_{t_0}=i_0)=\mathbb{P}(X_t=j\mid X_s=i)$$
But if I am not mistaken, this in general only holds if the state space $S$ is countable AND the filtration is generated by $(X_t)_{t\ge0}$ (in which case it can easily be deduced from (1.))?
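For the countable case, the deduction from (1.) as I understand it (assuming $\mathcal{F}_t=\sigma(X_r: r\le t)$ and that the conditioning event has positive probability):

```latex
% Write B for the conditioning event; it lies in F_s since all times are <= s:
B=\{X_s=i,\,X_{t_n}=i_n,\dots,X_{t_0}=i_0\}\in\mathcal{F}_s,\qquad\mathbb{P}(B)>0
% On B we have X_s = i, so the sigma(X_s)-measurable random variable
% P(X_t = j | X_s) is a.s. constant on B, equal to P(X_t = j | X_s = i). Hence
\mathbb{P}(X_t=j,\,B)
  =\mathbb{E}\!\left[1_B\,\mathbb{P}(X_t=j\mid\mathcal{F}_s)\right]
  =\mathbb{E}\!\left[1_B\,\mathbb{P}(X_t=j\mid X_s)\right]
  =\mathbb{P}(X_t=j\mid X_s=i)\,\mathbb{P}(B)
% Dividing by P(B) gives the elementary definition above.
```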