Equivalence of definitions of Markov Process.


I have come across two definitions of Markov processes, the first of which is paraphrased below (J Sun & TD Lynch, 2008, p. 482):

Definition 1 (D1)

A process in which the future probability conditional on the present and past values is equal to the future probability conditional only on the present value. In mathematical notation: $$P(x_0,t_0|x_1,t_1;x_2,t_2;\ldots; x_n,t_n)=P(x_0,t_0|x_1,t_1)$$ with $t_0\gt t_1 \gt \ldots \gt t_n$.

The other definition (for which, unfortunately, I don't have an explicit, publicly available reference) is:

Definition 2 (D2)

A process $x(t)$ is a Markov process if future values can be evaluated from current values without any reference to past values.

Take the Langevin equation* as an example. My interpretation of this is that we can write: $$x(t_0)=x(t_1) +\int^{t_0}_{t_1} \eta(t)\,dt$$ where, if the noise $\eta(t)$ were anything but delta-correlated, it would depend on the previous history and the process would then not be Markov.
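To make the Langevin example concrete, here is a minimal numerical sketch (an Euler discretisation with Gaussian white noise; the function name, step size, and seed are illustrative, not from any reference). With delta-correlated noise each increment is an independent Gaussian, so the update for $x(t_{k+1})$ uses only $x(t_k)$ and fresh noise, never the earlier history:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_langevin(x0, n_steps, dt=0.01):
    """Euler discretisation of x(t0) = x(t1) + integral of eta(t) dt.

    With delta-correlated (white) noise, the integral over each small
    step is an independent Gaussian with variance proportional to dt,
    so the next value depends on the current value only (Markov).
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        # The increment uses x[k] and fresh, independent noise only;
        # no earlier x[0..k-1] values enter the update.
        x[k + 1] = x[k] + rng.normal(0.0, np.sqrt(dt))
    return x

path = simulate_langevin(0.0, 1000)
```

If the noise were coloured (correlated across steps), the increments would no longer be independent and the update would implicitly carry information about the past, breaking the Markov property in the sense of D2.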

The equivalence between D1 and D2 seems intuitively valid, but can it be proved mathematically?

* This is only an example and not the focus of the question, which concerns general processes.