If two Markov processes have the same $2$-dimensional distributions then they are equivalent


My problem:

Let $(\Omega,\mathcal{F}, \mathbb{P}, \mathcal{F}_t)$ be a filtered probability space and $(E,\mathcal{E})$ a measurable space. If $X,Y:\mathbb{R}_{+}\times\Omega\to E $ are Markov processes (with respect to $\mathcal{F}_t$) with the same 2-dimensional distributions, then they are equivalent (i.e. they have the same law).

My attempt: I observed that $\mathbb{E}[\mathbb{I}_{X_s \in A}\mathbb{I}_{X_t \in B}]=\mathbb{E}[\mathbb{I}_{X_s \in A}\mathbb{E}[\mathbb{I}_{X_t \in B}|\mathcal{F}_s]]=\mathbb{E}[\mathbb{I}_{X_s \in A}p(s, t, X_s, B)]$, where $p(s, t) : E \times \mathcal{E} \to [0,1]$ is the transition kernel of $X$. By hypothesis we thus obtain: $$\mathbb{E}[\mathbb{I}_{X_s \in A}p(s, t, X_s, B)]=\mathbb{E}[\mathbb{I}_{Y_s \in A}q(s, t, Y_s, B)],$$ where $q$ is the transition kernel of $Y$. It follows that $p(s,t)(x, B)=q(s,t)(x, B)$ for almost every $x \in E$ with respect to the measure $(X_s)_*(\mathbb{P})=(Y_s)_*(\mathbb{P})$, i.e. the pushforward probability of $X_s$ onto $E$. Why should this imply that the two processes have the same finite-dimensional distributions? I think that I should write $\mathbb{E}[\mathbb{I}_{X_{t_1} \in A_1} \dots \mathbb{I}_{X_{t_n} \in A_n}]$ as an integral involving $p$ with respect to a measure on which $p$ and $q$ agree, but I do not know how, and I am not sure that this would conclude the argument.


Let me write $X_i = X_{t_i}$ and similarly for $A_i$. I use $\mu_0$ for the law of $X_{t_0}$. We write $$ p(s,t; x_s, A) = P(X_t \in A \mid X_s = x_s) = \int\limits_A p(s,t; x_s, dx_t). $$ The fact that $X$ is a Markov process entails that, when we condition on a given time, the (strict) past and future are independent. We can write, at least heuristically, $$ \begin{align*} P(X_n \in A_n, \ldots, X_0 \in A_0) &= \int\limits_{A_0} d\mu_0(x_0) P(X_n \in A_n, \ldots, X_1 \in A_1 \mid X_0 = x_0) \\ &= \int\limits_{A_0} d\mu_0(x_0) \int\limits_{A_1} p(t_0, t_1; x_0, dx_1) P(X_n \in A_n, \ldots, X_2 \in A_2 \mid X_1 = x_1) \\ &= \ldots \\ &= \int\limits_{A_0} d\mu_0(x_0) \int\limits_{A_1} p(t_0, t_1; x_0, dx_1) \ldots \int\limits_{A_n} p(t_{n-1}, t_n; x_{n-1}, dx_n). \end{align*} $$ Therefore, the transitions involving $n+1$ different times are determined by the transitions involving pairs of times. Also, as far as I know, the left-most side above is defined to be the right-most side, and one only shows that a measure $P$ exists via existence arguments (such as Kolmogorov's extension theorem), since this equality (left-most $=$ right-most) holds by definition only on finite-dimensional cylinders.

Another way of writing this is as follows. Consider a family of times $(t_k)$ and a family of positions $(x_k)$, where $k = 0, \ldots, n$ and $t_0 < \ldots < t_n$. We know that the law of a stochastic process is determined by its finite-dimensional distributions.
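The heuristic conditioning step above can be made precise with the tower property of conditional expectation. For instance, in the case of three times $t_0 < t_1 < t_2$, $$ \begin{align*} P(X_0 \in A_0, X_1 \in A_1, X_2 \in A_2) &= \mathbb{E}\big[\mathbb{I}_{X_0 \in A_0}\,\mathbb{I}_{X_1 \in A_1}\,\mathbb{E}[\mathbb{I}_{X_2 \in A_2} \mid \mathcal{F}_{t_1}]\big] \\ &= \mathbb{E}\big[\mathbb{I}_{X_0 \in A_0}\,\mathbb{I}_{X_1 \in A_1}\,p(t_1, t_2; X_1, A_2)\big], \end{align*} $$ and iterating this downwards in the time index expresses the joint probability through $\mu_0$ and the pairwise kernels alone.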
So, introduce the probability law $$ \begin{align*} r((t_k); (dx_k)) &= P(X_{t_0} \in dx_0, \ldots, X_{t_n} \in dx_n) \\ &= P(X_{t_0} = x_0, \ldots, X_{t_n} = x_n)\, dx_0 \cdots dx_n \quad \text{(heuristically, when densities exist);} \end{align*} $$ in other words, $r((t_k); (dx_k))$ is the finite-dimensional distribution of the vector $(X_{t_k})$. A Markov process satisfies $$ r((t_k); (dx_k)) = \mu_{t_0}(dx_0) p(t_0, t_1; x_0, dx_1) p(t_1, t_2; x_1, dx_2) \cdots p(t_{n-1}, t_n; x_{n-1}, dx_n), $$ where $p(s,t; x_s, dx_t)$ is defined as above (the conditional law of $X_t$ given $X_s = x_s$). Since $X$ and $Y$ have the same one- and two-dimensional distributions, $\mu_{t_0}$ is the same for both processes and, as you observed, their kernels agree almost everywhere with respect to the law of the process at each time; hence the right-hand side, and therefore every finite-dimensional distribution, coincides for the two processes. I hope this helps!
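As a sanity check (not part of the original argument), here is a minimal numerical sketch in Python with a hypothetical two-state chain and three times: the three-dimensional distribution is built entirely from the initial law and the pairwise kernels, and its two-dimensional marginals recover the pairwise distributions.

```python
import numpy as np

# Hypothetical example: E = {0, 1}, three times t0 < t1 < t2.
mu0 = np.array([0.3, 0.7])                 # law of X_{t0}
P01 = np.array([[0.9, 0.1], [0.2, 0.8]])   # kernel p(t0, t1; . , .)
P12 = np.array([[0.5, 0.5], [0.4, 0.6]])   # kernel p(t1, t2; . , .)

# Three-dimensional distribution built from the pairwise data alone:
# r(i, j, k) = mu0(i) p(t0, t1; i, j) p(t1, t2; j, k)
r = mu0[:, None, None] * P01[:, :, None] * P12[None, :, :]

# It is a probability distribution ...
assert np.isclose(r.sum(), 1.0)

# ... whose two-dimensional marginals are the pairwise distributions:
joint01 = mu0[:, None] * P01          # law of (X_{t0}, X_{t1})
assert np.allclose(r.sum(axis=2), joint01)
joint12 = (mu0 @ P01)[:, None] * P12  # law of (X_{t1}, X_{t2})
assert np.allclose(r.sum(axis=0), joint12)
```

Any other process with the same initial law and the same pairwise kernels necessarily produces the same array `r`, which is the finite-state analogue of the identity for $r((t_k); (dx_k))$ above.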