Show the Markov property for a stochastic process with independent increments


I am reading a textbook and got confused by an example which shows that a process $X_t$ with independent increments and $X_0=0$ is a Markov process. The proof reads\begin{align*}\mathbb{P}(X_t\leq x\mid X_{t_1},\ldots ,X_{t_n}) & =\mathbb{P}(X_t-X_{t_n}+X_{t_n}\leq x\mid X_{t_1},\ldots ,X_{t_n}) \\ & =\mathbb{P}(X_t-X_{t_n}+X_{t_n}\leq x\mid X_{t_n}) \\ & =\mathbb{P}(X_t\leq x\mid X_{t_n}) \end{align*} I can see that, since the increment $X_t-X_{t_n}$ is independent of the past, we can remove $X_{t_1},\ldots ,X_{t_{n-1}}$ from the condition, but how do we deal with the $X_{t_n}$? Is it also independent of $X_{t_1},\ldots ,X_{t_{n-1}}$?



After some small manipulations with the idea of independent increments, I think I have figured out a method: \begin{align*} \mathbb{P}(X_t \leq x | X_{t_1},\ldots ,X_{t_n}) &= \mathbb{P}(X_t-X_{t_n} \leq x-X_{t_n} | X_{t_1},\ldots ,X_{t_n}) \\ &= \mathbb{P}(X_t-X_{t_n} \leq x-X_{t_n} |X_{t_n}-X_{t_{n-1}}, X_{t_{n-1}}- X_{t_{n-2}}, \ldots, X_{t_2} - X_{t_1}, X_{t_1}) \\ &= \mathbb{P}(X_t-X_{t_n} \leq x-X_{t_n} |X_{t_{n}}) \\ &= \mathbb{P}(X_t \leq x | X_{t_{n}}) \end{align*} The second equality holds because, since $X_0=0$, the increments $X_{t_n}-X_{t_{n-1}},\ldots,X_{t_2}-X_{t_1},X_{t_1}$ generate the same $\sigma$-algebra as $X_{t_1},\ldots,X_{t_n}$ (note that $X_{t_1}=X_{t_1}-X_0$ is itself an increment, so it must be included). The third holds because $X_t-X_{t_n}$ is independent of all of these increments, while $X_{t_n}$ is just their sum, so conditioning on all the increments gives the same answer as conditioning on $X_{t_n}$ alone.
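Not part of the proof, but here is a quick Monte Carlo sanity check of the conclusion on a concrete independent-increment process of my own choosing: a simple symmetric random walk observed at times $1,2,3$. Given $X_2=0$, the conditional probability $\mathbb{P}(X_3\le 0\mid X_2=0)$ should be $1/2$ regardless of the value of $X_1$:

```python
import random

random.seed(0)

N = 200_000
# counts[x1] = [number of samples with X_2 == 0, of those, how many had X_3 <= 0]
counts = {1: [0, 0], -1: [0, 0]}

for _ in range(N):
    s1 = random.choice([-1, 1])          # independent +-1 steps
    s2 = random.choice([-1, 1])
    s3 = random.choice([-1, 1])
    x1, x2, x3 = s1, s1 + s2, s1 + s2 + s3
    if x2 == 0:                          # condition on X_2 = 0
        counts[x1][0] += 1
        if x3 <= 0:                      # event {X_3 <= 0}
            counts[x1][1] += 1

p_given_x1_pos = counts[1][1] / counts[1][0]    # estimate given X_1 = +1, X_2 = 0
p_given_x1_neg = counts[-1][1] / counts[-1][0]  # estimate given X_1 = -1, X_2 = 0
print(p_given_x1_pos, p_given_x1_neg)           # both should be near 0.5
```

The two conditional estimates agree (up to sampling noise), illustrating that once $X_2$ is known, $X_1$ carries no further information about $X_3$.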


Suppose $\sigma(X)\perp \sigma(Y,Z)$, where $X,Y,Z\in L^1$. We want to show that for every measurable and bounded $f$ we have $$E[f(X,Y)|Y,Z]=E[f(X,Y)|Y]$$ Define the set of functions $$\mathscr{H}:=\{h:E[h(X,Y)|Y,Z]=E[h(X,Y)|Y]\}$$ and the family of sets $$\mathscr{A}:=\{X^{-1}(A)\cap Y^{-1}(B):A,B\in \mathcal{B}(\mathbb{R})\}$$ $\mathscr{A}$ is a $\pi$-system. Let $C\in \mathscr{A}$. For any $D\in \sigma(Y,Z)$ $$\begin{aligned}E[\mathbf{1}_D\mathbf{1}_C]&=E[\mathbf{1}_D\mathbf{1}_{Y^{-1}(B)}\mathbf{1}_{X^{-1}(A)}]\\ &=E[\mathbf{1}_D\mathbf{1}_{Y^{-1}(B)}]E[\mathbf{1}_{X^{-1}(A)}]\\ &=E[\mathbf{1}_D\mathbf{1}_{Y^{-1}(B)}E[\mathbf{1}_{X^{-1}(A)}]]\\ &=E[\mathbf{1}_D\mathbf{1}_{Y^{-1}(B)}E[\mathbf{1}_{X^{-1}(A)}|Y]]\\ &=E[\mathbf{1}_DE[\mathbf{1}_{Y^{-1}(B)}\mathbf{1}_{X^{-1}(A)}|Y]]\\ &=E[\mathbf{1}_DE[\mathbf{1}_C|Y]] \end{aligned}$$ where the second equality uses $\sigma(X)\perp\sigma(Y,Z)$ (since $D\cap Y^{-1}(B)\in\sigma(Y,Z)$) and the fourth uses that $X$ is in particular independent of $Y$. Since $E[\mathbf{1}_C|Y]$ is $\sigma(Y,Z)$-measurable, it follows that $$E[\mathbf{1}_C|Y,Z]=E[\mathbf{1}_C|Y]$$ so for all $C\in \mathscr{A}$ we have $\mathbf{1}_C\in \mathscr{H}$. Since linearity and monotone convergence hold for conditional expectations, $\mathscr{H}$ is a monotone class for $\mathscr{A}$ and contains all bounded functions which are $\sigma(\mathscr{A})$-measurable. Now since $$\sigma(X)\cup\sigma(Y)\subseteq \mathscr{A}\subseteq \sigma(X,Y)\implies \sigma(\mathscr{A})=\sigma(X,Y)$$ the claim is proved. In your case $$f(u,v)=\mathbf{1}_{\{u+v\leq x\}}$$
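To spell out how this lemma answers the original question (the choice of $X$, $Y$, $Z$ below is mine): take $$X=X_t-X_{t_n},\qquad Y=X_{t_n},\qquad Z=(X_{t_1},\ldots,X_{t_{n-1}}),$$ so that $\sigma(X)\perp\sigma(Y,Z)$ by independent increments together with $X_0=0$. With $f(u,v)=\mathbf{1}_{\{u+v\leq x\}}$ the lemma gives \begin{align*} \mathbb{P}(X_t\leq x\mid X_{t_1},\ldots,X_{t_n}) &= E[f(X,Y)\mid Y,Z] \\ &= E[f(X,Y)\mid Y] \\ &= \mathbb{P}(X_t\leq x\mid X_{t_n}), \end{align*} which is exactly the step in the textbook's proof.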