Construction of pre-Brownian motion from Markov transition kernels


I am reading very nice notes on probability theory by Bruce Driver, but since I didn't have a lecture on the subject I sometimes get confused. In particular, I am confused about the existence of a pre-Brownian motion (process with independent increments where the increments are normally distributed, i.e. a Brownian motion but not necessarily continuous). Corollary 22.27 states the existence of pre-B motion and is based on exercise 17.8 (which says that if the pre-B motion would exist it would have some properties) and Theorem 17.11 which states that given Markov transition kernels we can construct a Markov process with those kernels. However, I am having a hard time piecing this together.


What you call 'pre-Brownian motion' is just the usual Brownian motion, but considered only on events in the cylindrical sigma algebra. The cylindrical sigma algebra is a 'very small' sigma algebra generated by finite-dimensional random variables of the form $f(B_{t_1},\ldots,B_{t_n})$; for instance, suprema of the process do not belong to it: the supremum of a 'pre-Brownian motion' simply does not make direct sense.

The reason some authors consider it nonetheless is the Kolmogorov extension theorem (https://en.wikipedia.org/wiki/Kolmogorov_extension_theorem), which states that essentially any consistent family of finite-dimensional distributions (consistent in terms of marginalization; for a Markov family this is just the semigroup property of the transition probabilities) generates a well-defined process distribution on the cylindrical sigma algebra. What you seem to be struggling with is the direct application of this theorem to the case of the heat kernel.
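As a concrete illustration of that consistency condition, here is a minimal numerical sketch (assuming NumPy; `heat_kernel` is just an illustrative helper name, not from the notes) checking the Chapman–Kolmogorov / semigroup property of the heat kernels by a discrete convolution:

```python
import numpy as np

def heat_kernel(x, t):
    """Density of N(0, t) at x (illustrative helper, not from the notes)."""
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# Fine symmetric grid; the Riemann-sum convolution below approximates
# the integral  int q_s(0, y) q_{t-s}(y, x) dy.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
s, t = 0.7, 1.9  # arbitrary times 0 < s < t, chosen for the demo

# Chapman-Kolmogorov: convolving q_s(0, .) with q_{t-s} should give q_t(0, .).
conv = np.convolve(heat_kernel(x, s), heat_kernel(x, t - s), mode="same") * dx
err = np.max(np.abs(conv - heat_kernel(x, t)))
print(err)  # tiny numerical error
```

The Gaussian tails beyond the grid are negligible, so the discrete convolution reproduces the heat kernel at time $t$ up to numerical error.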

Now note that this process distribution may well be a.s. continuous in some sense; it's just that this statement cannot even be expressed on the restrictive cylindrical sigma algebra. So the point is not that 'pre-Brownian motion' is "not necessarily continuous"; the point is that "continuity does not make sense" for pre-Brownian motion.

The magic trick, which is missing in the references you mention, is the Kolmogorov continuity theorem (https://en.wikipedia.org/wiki/Kolmogorov_continuity_theorem), which ensures that you can uniquely 'lift' your 'pre-Brownian motion' to a proper stochastic process, seen as a random variable in the Polish space of continuous paths (endowed with the uniform norm). By doing so, you considerably extend the sigma algebra on which your process is defined: any path functional that is Borel measurable for continuous trajectories will work. The Kolmogorov continuity theorem tells you exactly that there is a unique natural way to do this starting from the cylindrical sigma algebra.
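For concreteness, the continuity theorem applies here because the Gaussian increments satisfy $E|B_t-B_s|^4 = 3|t-s|^2$ (the criterion with $p=4$, $\varepsilon=1$). A rough Monte Carlo sketch of this moment identity (assuming NumPy; the sample times are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

s, t = 0.4, 1.1   # arbitrary sample times (an assumption for the demo)
n = 500_000

# B_t - B_s ~ N(0, t - s); the fourth moment of N(0, v) is 3 v^2, which is
# exactly the bound needed by the Kolmogorov continuity criterion.
inc = rng.normal(0.0, np.sqrt(t - s), size=n)
m4 = np.mean(inc**4)
print(m4, 3 * (t - s) ** 2)  # the two numbers should be close
```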

Finally, note that the Kolmogorov extension theorem is very general but very abstract, and it requires the underlying probability space to be 'large' (not just $(0,1)$ with the uniform measure). From the perspective of constructive mathematics, mathematical physics, or simulation, it is not very satisfactory, and some authors try to avoid it. All basic càdlàg random processes can be constructed without it; see for instance Lévy's construction of Brownian motion.
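As a sketch of what such a direct construction looks like (assuming NumPy; `levy_brownian` is an illustrative name, not from any reference), Lévy's midpoint refinement on $[0,1]$ repeatedly fills in dyadic points with independent Gaussian bridge corrections:

```python
import numpy as np

def levy_brownian(levels, rng):
    """Levy's construction on [0, 1]: start with the endpoints and
    repeatedly insert midpoints with independent Gaussian corrections
    of variance dt/4 (the Brownian-bridge conditional variance)."""
    b = np.array([0.0, rng.normal(0.0, 1.0)])  # B_0 = 0, B_1 ~ N(0, 1)
    dt = 1.0
    for _ in range(levels):
        dt /= 2.0  # new grid spacing; old spacing was 2*dt
        mid = 0.5 * (b[:-1] + b[1:]) \
            + rng.normal(0.0, np.sqrt(dt / 2), size=len(b) - 1)
        out = np.empty(2 * len(b) - 1)
        out[0::2] = b    # keep the old grid values
        out[1::2] = mid  # interleave the new midpoints
        b = out
    return b  # values of B on the dyadic grid k / 2**levels

rng = np.random.default_rng(0)
path = levy_brownian(10, rng)  # 1025 values approximating a Brownian path
```

Each refinement only uses countably many Gaussians, so the whole construction lives on a concrete probability space.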

Also, I don't think that considering Kolmogorov extension (abstract existence) without Kolmogorov continuity (continuous version + uniqueness) is insightful. Both are better, and nicer, together.


I would like to try to answer my own question; maybe I can get some feedback. Exercise 17.8 says that the heat kernels satisfy the Chapman–Kolmogorov equation, and Theorem 17.11 thus says that we can construct a Markov process with these as transition kernels.

What is left to show is that the Markov process $(X_t)_{t\ge0}$ constructed with the heat kernels as its transition kernels has increments $X_t-X_s\sim \mathcal{N}(0,t-s)$, and that for $t_1<t_2<\ldots<t_n$ the increments $X_{t_2}-X_{t_1},X_{t_3}-X_{t_2},\ldots,X_{t_n}-X_{t_{n-1}}$ are independent. First note that by Theorem 17.10 we have

$$\operatorname{law}(X_s,X_t)(dx,dy)=q_{s-0}(0,dx)\,q_{t-s}(x,dy)=\frac{1}{\sqrt{2\pi s}}e^{-\frac{1}{2s}x^2}\,\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{1}{2(t-s)}(y-x)^2}\,dx\,dy.$$ Using the change of variables formula we have \begin{align} P(X_t-X_s\in A) &=E[1_A(X_t-X_s)]\\ &=\int 1_A(y-x)\frac{1}{\sqrt{2\pi s}}e^{-\frac{1}{2s}x^2}\,\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{1}{2(t-s)}(y-x)^2}\,dx\,dy\\ &=\int 1_A(z)\frac{1}{\sqrt{2\pi s}}e^{-\frac{1}{2s}x^2}\,\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{1}{2(t-s)}z^2}\,dx\,dz\\ &=\int 1_A(z)\frac{1}{\sqrt{2\pi(t-s)}}e^{-\frac{1}{2(t-s)}z^2}\,dz\\ &=E[1_A(Z)], \end{align}

where $Z\sim\mathcal{N}(0,t-s)$. We used the change of variables $x=x$, $z=y-x$, whose Jacobian is $1$, and the fact that $\int\frac{1}{\sqrt{2\pi s}}e^{-\frac{1}{2s}x^2}\,dx=1$.
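This computation can be sanity-checked by simulation (assuming NumPy; the times are arbitrary choices for the demo): sample the Markov process from its heat-kernel transitions and check the moments of an increment taken across an intermediate sampling time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample the Markov process at times 0 < 0.3 < 1.0 < 2.5: given X_s = x,
# the heat-kernel transition q_{t-s}(x, .) is the N(x, t - s) law.
times = [0.0, 0.3, 1.0, 2.5]
n = 500_000

x = np.zeros(n)  # X_0 = 0
path = [x]
for s, t in zip(times[:-1], times[1:]):
    x = x + rng.normal(0.0, np.sqrt(t - s), size=n)
    path.append(x)

# The increment X_{2.5} - X_{0.3} should be N(0, 2.2), even though the
# process was sampled through the intermediate time 1.0.
inc = path[3] - path[1]
print(np.mean(inc), np.var(inc))  # near 0 and 2.2
```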

For the independence I only show the case $t_1<t_2<t_3$, as the general case is similar:

\begin{align} \operatorname{law}(X_{t_1},X_{t_2},X_{t_3})(dx_1,dx_2,dx_3)&=q_{t_1-0}(0,dx_1)\,q_{t_2-t_1}(x_1,dx_2)\,q_{t_3-t_2}(x_2,dx_3)\\ &=\frac{1}{\sqrt{2\pi t_1}}e^{-\frac{x_1^2}{2t_1}}\frac{1}{\sqrt{2\pi (t_2-t_1)}}e^{-\frac{(x_2-x_1)^2}{2(t_2-t_1)}}\frac{1}{\sqrt{2\pi (t_3-t_2)}}e^{-\frac{(x_3-x_2)^2}{2(t_3-t_2)}}\,dx_1\,dx_2\,dx_3. \end{align}

Again by the change of variables formula: \begin{align} &P((X_{t_2}-X_{t_1},X_{t_3}-X_{t_2})\in A)\\ =&\,E[1_A(X_{t_2}-X_{t_1},X_{t_3}-X_{t_2})]\\ =&\int 1_A(x_2-x_1,x_3-x_2)\frac{1}{\sqrt{2\pi t_1}}e^{-\frac{x_1^2}{2t_1}}\frac{1}{\sqrt{2\pi (t_2-t_1)}}e^{-\frac{(x_2-x_1)^2}{2(t_2-t_1)}}\frac{1}{\sqrt{2\pi (t_3-t_2)}}e^{-\frac{(x_3-x_2)^2}{2(t_3-t_2)}}\,dx_1\,dx_2\,dx_3\\ =&\int 1_A(u_2,u_3)\frac{1}{\sqrt{2\pi t_1}}e^{-\frac{u_1^2}{2t_1}}\frac{1}{\sqrt{2\pi (t_2-t_1)}}e^{-\frac{u_2^2}{2(t_2-t_1)}}\frac{1}{\sqrt{2\pi (t_3-t_2)}}e^{-\frac{u_3^2}{2(t_3-t_2)}}\,du_1\,du_2\,du_3\\ =&\int 1_A(u_2,u_3)\frac{1}{\sqrt{2\pi (t_2-t_1)}}e^{-\frac{u_2^2}{2(t_2-t_1)}}\frac{1}{\sqrt{2\pi (t_3-t_2)}}e^{-\frac{u_3^2}{2(t_3-t_2)}}\,du_2\,du_3\\ =&\,E[1_A(Z_1,Z_2)], \end{align} where $(Z_1,Z_2)\sim \mathcal{N}\left(0,\begin{bmatrix}t_2-t_1&0\\0&t_3-t_2\end{bmatrix}\right)$; in particular, the components are independent. We have used the change of variables $u_1=x_1$, $u_2=x_2-x_1$, $u_3=x_3-x_2$, which has Jacobian $1$.
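A quick simulation check of this independence claim (assuming NumPy; the times are arbitrary choices for the demo): the sample covariance matrix of the two increments should be nearly diagonal, with the variances $t_2-t_1$ and $t_3-t_2$ on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

t1, t2, t3 = 0.5, 1.2, 2.0
n = 500_000

# Build (X_{t1}, X_{t2}, X_{t3}) directly from the transition kernels:
# each step adds an independent N(0, time difference) perturbation.
x1 = rng.normal(0.0, np.sqrt(t1), size=n)
x2 = x1 + rng.normal(0.0, np.sqrt(t2 - t1), size=n)
x3 = x2 + rng.normal(0.0, np.sqrt(t3 - t2), size=n)

inc1, inc2 = x2 - x1, x3 - x2
cov = np.cov(inc1, inc2)
print(cov)  # diagonal near t2-t1 and t3-t2, off-diagonal near 0
```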