Different definitions of Markov process


A fairly general definition of a Markov process seems to be the following. Let $ X_s, ~ s\in B\subset [0,\infty) $, be a (real-valued) stochastic process. It is said to have the Markov property if

$$ E[Y \mid \mathcal{F}_{\leq t}] = E[Y \mid \mathcal{F}_{= t}], $$

where $ t\in B$, $ Y $ is $ \mathcal{F}_{\geq t} $-measurable, and $\mathcal{F}_{\leq t}, \mathcal{F}_{= t}, \mathcal{F}_{\geq t}$ are the sigma-algebras generated by the respective sets of $X_s$.

By contrast, a discrete-time stochastic process $ X_n, ~ n=1,\ldots,N $ (each now defined on the same finite probability space) is said to have the Markov property if $$\mathbb{P}[X_{n}=x_{n} \mid X_{n-1}=x_{n-1}, \ldots, X_{1}=x_{1} ]= \mathbb{P}[X_{n}=x_{n} \mid X_{n-1}=x_{n-1}].$$

My question is: why is the second definition as strong as the more general one? In particular, why does the second definition imply that $$\mathbb{P}[X_{n-1}=x_{n-1}, X_{n}=x_{n}, \ldots, X_{N}=x_{N} \mid X_{n-1}=x_{n-1}, \ldots, X_{1}=x_{1} ]= \mathbb{P}[X_{n-1}=x_{n-1}, X_{n}=x_{n}, \ldots, X_{N}=x_{N} \mid X_{n-1}=x_{n-1}],$$ as is required by the first definition (taking $t = n-1$)?


Best answer:

We denote by $S$ the state space. Let $A= \{X_{n+1} = x_{n+1}\}$ and $B = \{X_n=x_n , X_{n-1}=x_{n-1} \}$. We have that

$$ \mathbb{P}(A \mid B) = \frac{1}{\mathbb{P}(B)} \sum_{\lambda \in \Lambda} \mathbb{P}(A \mid B_{\lambda} )\,\mathbb{P}(B_{\lambda}), $$ where $\Lambda = S^{n-2}$ and, for a given $\lambda = (a_1, \ldots, a_{n-2}) \in \Lambda$, $B_{\lambda} = \{ X_1 = a_1, \ldots, X_{n-2}=a_{n-2}, X_{n-1} =x_{n-1}, X_n =x_n\}$. Note that the events $B_{\lambda}$ are disjoint and their union over $\Lambda$ is $B$.

Since the discrete Markov property gives $\mathbb{P}(A \mid B_{\lambda}) = \mathbb{P}(A \mid X_n = x_n)$ for all $\lambda \in \Lambda$, we have

$$\mathbb{P}(X_{n+1} = x_{n+1} \mid X_n=x_n, X_{n-1}= x_{n-1}) = \mathbb{P}(A \mid B) = \mathbb{P}(A \mid X_n = x_n)\,\frac{\sum_{\lambda \in \Lambda} \mathbb{P}(B_{\lambda})}{\mathbb{P}(B)} = \mathbb{P}(A \mid X_n = x_n) = \mathbb{P}(X_{n+1} =x_{n+1} \mid X_n = x_n),$$ where the third equality holds because the $B_{\lambda}$ partition $B$, so $\sum_{\lambda \in \Lambda} \mathbb{P}(B_{\lambda}) = \mathbb{P}(B)$.
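This one-step identity can be sanity-checked numerically by brute-force enumeration. Below is a small sketch on a hypothetical 2-state chain (the initial distribution `pi` and transition matrix `P` are made-up parameters, not from the question):

```python
from itertools import product

# Hypothetical 2-state Markov chain: initial distribution and transition matrix.
pi = [0.3, 0.7]
P = [[0.9, 0.1],
     [0.4, 0.6]]

def path_prob(path):
    """Joint probability of one full trajectory (x_1, ..., x_n) of the chain."""
    p = pi[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a][b]
    return p

def joint(n, constraints):
    """Sum path_prob over all length-n paths matching {index: value} constraints."""
    return sum(path_prob(path)
               for path in product(range(2), repeat=n)
               if all(path[i] == v for i, v in constraints.items()))

# Check P(X_4 = c | X_3 = b, X_2 = a) == P(X_4 = c | X_3 = b)
# for every choice of states (0-based path indices 1, 2, 3).
n = 4
for a, b, c in product(range(2), repeat=3):
    lhs = joint(n, {1: a, 2: b, 3: c}) / joint(n, {1: a, 2: b})
    rhs = joint(n, {2: b, 3: c}) / joint(n, {2: b})
    assert abs(lhs - rhs) < 1e-12
```

Both conditionals also agree with the transition probability itself, which is exactly what the displayed chain of equalities asserts.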

To conclude, note that

\begin{align}
\mathbb{P}\left(X_{n+1}=x_{n+1}, X_{n}=x_n \mid X_{n-1}=x_{n-1}, \ldots, X_1=x_1 \right)
&= \frac{\mathbb{P}(X_{n+1}=x_{n+1}, \ldots, X_1 = x_1)}{\mathbb{P}(X_{n-1}=x_{n-1}, \ldots, X_1 = x_1)} \\
&= \frac{\mathbb{P}(X_{n+1}=x_{n+1} \mid X_n=x_n, \ldots, X_1=x_1)\,\mathbb{P}(X_n=x_n, \ldots, X_1=x_1)}{\mathbb{P}(X_{n-1}=x_{n-1}, \ldots, X_1 = x_1)} \\
&= \mathbb{P}(X_{n+1} = x_{n+1} \mid X_n=x_n)\,\mathbb{P}(X_n=x_n \mid X_{n-1}=x_{n-1}, \ldots, X_1=x_1) \\
&= \mathbb{P}(X_{n+1} = x_{n+1} \mid X_n=x_n, X_{n-1}= x_{n-1})\,\mathbb{P}(X_{n} = x_n \mid X_{n-1}=x_{n-1}) \\
&= \mathbb{P}(X_{n+1}=x_{n+1}, X_n = x_n \mid X_{n-1}=x_{n-1}),
\end{align}
where the third equality uses the Markov property, the fourth uses the Markov property together with the identity proved above, and the last is the chain rule conditioned on $\{X_{n-1}=x_{n-1}\}$.

PS: The argument with $N$ in place of $n+1$ is analogous, iterating the same steps.
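The full conclusion — that a whole future block conditioned on the entire past depends only on the present — can also be verified by brute-force enumeration. A self-contained sketch on a hypothetical 2-state chain (`pi` and `P` are made-up parameters):

```python
from itertools import product

# Hypothetical 2-state Markov chain: initial distribution and transition matrix.
pi = [0.3, 0.7]
P = [[0.9, 0.1],
     [0.4, 0.6]]

def path_prob(path):
    """Joint probability of one full trajectory (x_1, ..., x_n) of the chain."""
    p = pi[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a][b]
    return p

def joint(n, constraints):
    """Sum path_prob over all length-n paths matching {index: value} constraints."""
    return sum(path_prob(path)
               for path in product(range(2), repeat=n)
               if all(path[i] == v for i, v in constraints.items()))

# Check P(X_3 = c, X_4 = d | X_1 = a, X_2 = b) == P(X_3 = c, X_4 = d | X_2 = b)
# for every choice of states (0-based path indices on a length-4 chain).
n = 4
for a, b, c, d in product(range(2), repeat=4):
    lhs = joint(n, {0: a, 1: b, 2: c, 3: d}) / joint(n, {0: a, 1: b})
    rhs = joint(n, {1: b, 2: c, 3: d}) / joint(n, {1: b})
    assert abs(lhs - rhs) < 1e-12
```

Both sides reduce to the product of transition probabilities along the future block, which is the content of the equality the question asks about.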