Question in a theorem about diffusion process


$\mathbf{Definition}$: A continuous $n$-dimensional Markov process with transition probability function $p(s, x, t, A)$ is called a diffusion process if:
(i) for any $\epsilon>0, t \geqslant 0, x \in R^{n}$, $$ \lim _{h \downarrow 0} \frac{1}{h} \int_{|y-x|>\epsilon} p(t, x, t+h, d y)=0 \quad (1) $$
(ii) there exist an $n$-vector $b(x, t)$ and an $n \times n$ matrix $a(x, t)$ such that for any $\epsilon>0, t \geqslant 0, x \in R^{n}$, $$\lim _{h \downarrow 0} \frac{1}{h} \int_{|y-x|<\epsilon}\left(y_{i}-x_{i}\right) p(t, x, t+h, d y)=b_{i}(x, t)\quad(1 \leqslant i \leqslant n) \quad (2)$$ $$\lim _{h \downarrow 0} \frac{1}{h} \int_{|y-x|<\epsilon}\left(y_{i}-x_{i}\right)\left(y_{j}-x_{j}\right) p(t, x, t+h, d y)=a_{i j}(x, t)\quad (1 \leqslant i, j \leqslant n) \quad (3)$$ where $b=\left(b_{1}, \ldots, b_{n}\right), a=\left(a_{i j}\right) .$ The vector $b$ is called the drift coefficient and the matrix $a$ is called the diffusion coefficient.

$\mathbf{Lemma}$ The following conditions imply the conditions (i), (ii):
(i*) for some $\delta>0$ and all $t \geqslant 0, x \in R^{n}$, $$ \lim _{h \downarrow 0} \frac{1}{h} \int_{R^{n}}|x-y|^{2+\delta} p(t, x, t+h, d y)=0\quad(1') $$ (ii*) for any $t \geqslant 0, x \in R^{n}$, $$ \lim _{h \downarrow 0} \frac{1}{h} \int_{R^{n}}\left(y_{i}-x_{i}\right) p(t, x, t+h, d y)=b_{i}(x, t) \quad(1 \leqslant i \leqslant n) \quad(2') $$ $$ \lim _{h \downarrow 0} \frac{1}{h} \int_{R^{n}}\left(y_{i}-x_{i}\right)\left(y_{j}-x_{j}\right) p(t, x, t+h, d y)=a_{i j}(x, t) \quad (1 \leqslant i, j \leqslant n)\quad(3') . $$
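(The proof of the lemma is short; a sketch of why (1') implies (1), via a Chebyshev-type bound, may help. This argument is standard and not spelled out in the original post.) On the set $|y-x|>\epsilon$ we have $1 \leqslant |y-x|^{2+\delta}/\epsilon^{2+\delta}$, so
$$ \frac{1}{h} \int_{|y-x|>\epsilon} p(t, x, t+h, d y) \leqslant \frac{1}{h\,\epsilon^{2+\delta}} \int_{R^{n}}|y-x|^{2+\delta} p(t, x, t+h, d y) \xrightarrow[h \downarrow 0]{} 0 . $$
Similarly, (1') forces the tail contributions to (2') and (3') to vanish: for instance $|y_{i}-x_{i}| \leqslant |y-x|^{2+\delta}/\epsilon^{1+\delta}$ on $|y-x|>\epsilon$, so the integrals over $|y-x|<\epsilon$ and over $R^{n}$ have the same limits, which gives (2) and (3).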

$\mathbf{Theorem}$ Let $b(x, t), \sigma(x, t)$ be continuous in $(x, t) \in R^{n} \times[0, \infty)$ and satisfy the Lipschitz and linear growth conditions. Suppose $\xi_0$ is independent of $\mathcal{F}(w(t),t\ge 0)$ and $E|\xi_0|^2<\infty$. Then the solution of $$d\xi(t)=b(t,\xi(t))dt+\sigma(t,\xi(t))dw(t)$$ $$\xi(0)=\xi_0 \quad a.s.$$ is a diffusion process with drift coefficient $b(x, t)$ and diffusion matrix $a(x, t)=\sigma(x, t) \sigma^{*}(x, t)$.
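(As a numerical sanity check, not part of the original post, one can estimate the limits (2') and (3') by Monte Carlo from small-time increments of a concrete SDE. The sketch below uses the Ornstein–Uhlenbeck equation $d\xi = -\xi\,dt + dw$ in $n=1$, chosen because its transition density is exactly Gaussian; here $b(x,t)=-x$ and $a(x,t)=\sigma^2=1$.)

```python
import numpy as np

def ou_increment(x, h, n, rng):
    """Sample n exact increments X_h - x of dX = -X dt + dW started at x.

    The OU transition is Gaussian: X_h = x e^{-h} + N(0, (1 - e^{-2h}) / 2).
    """
    std = np.sqrt((1.0 - np.exp(-2.0 * h)) / 2.0)
    return x * np.exp(-h) + std * rng.standard_normal(n) - x

rng = np.random.default_rng(0)
x, h, n = 1.0, 1e-3, 2_000_000
dx = ou_increment(x, h, n, rng)

# Monte Carlo analogues of the limits (2') and (3'):
drift_est = dx.mean() / h        # should be close to b(x) = -x = -1
diff_est = (dx ** 2).mean() / h  # should be close to a(x) = 1
print(drift_est, diff_est)
```

Sampling from the exact transition (rather than an Euler step) keeps the only errors in the estimates the Monte Carlo noise and the $O(h)$ bias inherent in the finite-$h$ quotients.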

In the proof of this theorem, I am confused about how to show that the limits in (2) and (3) exist. We are given the lemma so that we can check conditions (1')-(3') instead of conditions (1)-(3) from the definition. However, in the proof of the theorem the author does not check condition (1'). So how is assumption (1') satisfied here? (Apparently (1) and (1') are very different.) Is the reason related to the second moment of $\xi(0)$ being finite?

Supplement: In the proof of the theorem, I notice the inequality $$E\big(|\xi(t)-\xi(s)|^4\big)\le C(t-s)^2 .$$ Does this mean that (1') is satisfied: since the fourth moment of the increments is finite, we can choose $\delta=2$?


Yes, take $\delta=2$, so that the integrand in (1') is $|x-y|^4$. Applying the fourth-moment estimate to the increment over $[t,t+h]$ gives $$\frac{1}{h}\int_{R^n}|x-y|^{4}\,p(t,x,t+h,dy)=\frac{1}{h}\,E\big[|\xi(t+h)-\xi(t)|^4\mid \xi(t)=x\big]\le \frac{Ch^2}{h}=Ch\to 0$$ as $h\downarrow 0$, so (1') is satisfied.