Question regarding Notes on Strong Markov Property

I wrote the following notes from a lecture a couple of weeks ago and I don't understand a particular line.

Suppose $B_t$ is a Brownian Motion.

Now look at $B^x_t = x + B_t$ which is a BM starting at $x$.
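The "one probability space, many processes" viewpoint can be sketched numerically. The following is an illustrative discretization (not part of the notes): one draw $\omega$ produces a single path `B`, and every $B^x$ is just that same path shifted by $x$. The grid step `dt` and the variable names are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# One draw omega from the background space: a single Brownian path B_t(omega),
# sampled on a uniform grid via independent Gaussian increments.
dt = 0.01
n = 1000
increments = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], np.cumsum(increments)])  # B_0 = 0

# Many processes on the same probability space: B^x_t(omega) = x + B_t(omega).
starts = [0.0, 1.0, -2.5]
paths = {x: x + B for x in starts}

# All the B^x share the same randomness; they differ only by the shift x.
assert all(np.allclose(paths[x] - x, B) for x in starts)
```

The point of the sketch is that only one source of randomness (`rng`) is ever sampled; the family $\{B^x\}$ is a family of deterministic transformations of that single draw.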

Let $\Omega'$ be the set of continuous functions on $[0,\infty)$, and define $X_t(\omega) = \omega(t)$.

Assume the $\sigma$-algebra $\mathcal{F}'$ is the one generated by the maps $X_t$.

Let $P^x$ on $(\Omega',\mathcal{F}')$ be the measure determined by its finite-dimensional distributions $P^x(X_{t_1}\in A_1, X_{t_2}\in A_2, \ldots, X_{t_n}\in A_n)$.
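The notes leave the right-hand side unstated. A standard way to write it out (this is the usual heat-kernel form of the Brownian transition densities, not something from the original notes) is, for $0 < t_1 < \cdots < t_n$, with the conventions $t_0 = 0$ and $y_0 = x$:

```latex
P^x\bigl(X_{t_1}\in A_1,\ldots,X_{t_n}\in A_n\bigr)
  = \int_{A_1}\!\cdots\!\int_{A_n}
    \prod_{i=1}^{n} p_{t_i - t_{i-1}}(y_{i-1}, y_i)\,
    \mathrm{d}y_n \cdots \mathrm{d}y_1,
\qquad
p_t(y, z) = \frac{1}{\sqrt{2\pi t}}\, e^{-(z-y)^2/(2t)}.
```

Each factor $p_{t_i - t_{i-1}}$ is the Gaussian density of an increment of length $t_i - t_{i-1}$, reflecting the independent, stationary increments of Brownian motion.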

The line of text that I do not understand is: "Basically, we have moved from one probability space $\Omega$ with many processes $B_t^x$ to one process $X_t$ with many probability measures $P^x$."

Would someone kindly elaborate on why the above statement is true? (Perhaps I wrote the notes down incorrectly.)

(Cont'd)

The advantage of this is that we can define a shift operator $\theta_t:\Omega'\to \Omega'$ by $\theta_t(\omega)(s) = \omega(t+s)$.
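On a discretized path, the shift operator is just dropping the initial segment. The following is an illustrative sketch (the grid `dt` and helper names are assumptions, and `t` is taken to be a grid point):

```python
import numpy as np

dt = 0.01  # grid step for the discretized path

def make_path(n, seed=0):
    """Sample a Brownian path omega on the grid 0, dt, 2*dt, ..., n*dt."""
    rng = np.random.default_rng(seed)
    return np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])

def shift(omega, t):
    """Discrete analogue of theta_t: (theta_t omega)(s) = omega(t + s)."""
    k = int(round(t / dt))
    return omega[k:]

omega = make_path(1000)
shifted = shift(omega, 2.0)

# The shifted path starts where the original path was at time t = 2.0.
assert shifted[0] == omega[200]
```

Note that `shift` acts on paths, not on random variables: it is a map $\Omega'\to\Omega'$, which is exactly why the canonical-space setup makes it easy to define.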

There is 1 answer below.

Well, in the first case, when you consider a Brownian motion $B_t$, the following is usually understood: you are working with a background probability triple $(\Omega,\mathcal{F},P)$, with $\Omega$ being some set, $\mathcal{F}$ being some $\sigma$-algebra and $P$ being some probability measure, and you consider a measurable mapping $B:\Omega\to C[0,\infty)$ such that the distribution of $B$ - technically, the pushforward measure $B(P)$ - is the Brownian motion distribution (given by a specification of its Gaussian finite-dimensional distributions). When you define $B^x_t = x + B_t$, you are defining this mapping on $\Omega$; that is, you define $$ B^x : \Omega \to C[0,\infty), \\ B^x_t(\omega) = x + B_t(\omega). $$ You then have a single probability space $(\Omega,\mathcal{F},P)$ endowed with many mappings $B^x$.

The other case you describe, with the probability measures $P^x$ and so forth, is the following. Consider the set $\Omega' = C[0,\infty)$, and let $w:\Omega'\to\Omega'$ be the identity mapping. For each $t\ge0$, $w_t$ is then a mapping from $\Omega'$ to $\mathbb{R}$, the projection mapping corresponding to the index $t$. We define $\mathcal{F}'$ to be the smallest $\sigma$-algebra making each $w_t$ measurable. Now let $P^x$ be a probability measure on $\Omega'=C[0,\infty)$; specifically, let $P^x$ be the distribution of Brownian motion starting at $x$. This is a probability measure uniquely specified through its finite-dimensional distributions $$ P^x(w_{t_1}\in A_1,\ldots,w_{t_n}\in A_n), $$ and whose existence, by the way, is rather non-trivial.

For each $x$, $P^x$ is a probability measure on $(\Omega',\mathcal{F}')$. Thus, you now have a measurable space with one actual mapping $w:\Omega'\to\Omega'$ but with many measures $P^x$. For each $x$, the distribution of $w$ relative to the probability triple $(\Omega',\mathcal{F}',P^x)$ is $w(P^x) = P^x$ (the equality holds since $w$ is the identity); thus, this distribution is that of Brownian motion starting at $x$. In other words, in this case you have a construction with "one process $w$" and many probability measures on the space where $w$ is defined, each giving rise to a different distribution of $w$.
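The "one process, many measures" picture can also be sketched numerically. In this illustrative discretization (not from the answer itself; `sample_Px`, the grid `dt`, and the seeds are assumptions), the coordinate maps $w_t$ are fixed once and for all, and only the sampling measure $P^x$ changes:

```python
import numpy as np

dt = 0.01
n = 1000

def sample_Px(x, seed):
    """Draw a path omega from P^x: discretized Brownian motion started at x."""
    rng = np.random.default_rng(seed)
    return x + np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])

def w_t(t, omega):
    """The one canonical process: the coordinate map w_t(omega) = omega(t)."""
    return omega[int(round(t / dt))]

# Same mapping w_t, different measures P^x: only the sampling step changes.
omega0 = sample_Px(0.0, seed=1)
omega5 = sample_Px(5.0, seed=1)
assert w_t(0.0, omega0) == 0.0 and w_t(0.0, omega5) == 5.0
```

Here `w_t` never changes between the two experiments; the starting point $x$ lives entirely in the measure from which the path was drawn, mirroring the construction with $(\Omega',\mathcal{F}',P^x)$.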

All this may seem like a rather bizarre construction. However, it can be convenient when working with distributional properties of stochastic processes, as the space $\Omega'$, and not the abstract, somewhat unspecified space $\Omega$, is where the stochastic process distributions really "live". This methodology is particularly useful in the context of, for example, Markov processes and marked point processes.