I quote Schilling & Partzsch (2012):
Markov property of Brownian motion Let $(B(t))_{t\ge0}$ be a $d$-dimensional Brownian motion and denote by $W(t):=B(t+a)-B(a)$ a "shifted" Brownian motion. Then $(B(t))_{0\le t\le a}$ and $(W(t))_{t\ge0}$ are independent, i.e. the $\sigma$-algebras generated by these processes are independent: $$\sigma\left(B(t): 0\le t\le a\right):=\mathcal{F}_a^B\perp \!\!\! \perp\mathcal{F}_{\infty}^W:=\sigma\left(W(t): 0\le t<\infty\right)\tag{1}$$ Proof Let $X_0,X_1,\ldots,X_n$ be $d$-dimensional random variables. Then $$\sigma(X_j:j=0,\ldots,n)=\sigma(X_0,X_j-X_{j-1}: j=1,\ldots,n)$$ [...]
Let $0=s_0<s_1\cdots <s_m=a=t_0<t_1<\cdots<t_n$.
[After some passages, using that $W(t_k-t_0)-W(t_{k-1}-t_0)=B(t_k)-B(t_{k-1})$ and $B(0)=W(0)=0$]
$$\bigcup_{0<s_1<\cdots<s_m\le a}\sigma\left(B(s_j): j=1,\ldots,m\right)\perp \!\!\!\perp\bigcup_{0<u_1<\cdots<u_n}\sigma\left(W(u_k): k=1,\ldots,n\right) \tag{2}$$ $\color{red}{\text{The families on the left and right-hand side of (2) are }\cap\text{-stable generators }}$ of $\mathcal{F}_a^B$ and $\mathcal{F}_{\infty}^W$, respectively, thus $\mathcal{F}_a^B\perp\!\!\!\perp\mathcal{F}_{\infty}^W$.
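(As a quick numerical sanity check of the statement, not part of the book's proof: with the arbitrary choices $a=1$, $s=0.5$, $t=0.7$, the variable $B(s)$, which is $\mathcal{F}_a^B$-measurable, should be uncorrelated with the shifted increment $W(t)=B(t+a)-B(a)$. A hypothetical simulation:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of sample paths

# Build B(0.5), B(1), B(1.7) from independent Gaussian increments.
B_05 = rng.normal(0.0, np.sqrt(0.5), n)          # B(0.5) ~ N(0, 0.5)
B_1  = B_05 + rng.normal(0.0, np.sqrt(0.5), n)   # B(1)   = B(0.5) + increment
B_17 = B_1 + rng.normal(0.0, np.sqrt(0.7), n)    # B(1.7) = B(1)   + increment

W_07 = B_17 - B_1  # W(0.7) = B(0.7 + 1) - B(1), the shifted Brownian motion

# Empirical covariance between B(0.5) and W(0.7): should be close to 0.
cov = np.cov(B_05, W_07)[0, 1]
print(f"empirical Cov(B(0.5), W(0.7)) = {cov:.4f}")
```

Of course, zero covariance is only a necessary condition for (and here, by Gaussianity, equivalent to) independence of this particular pair; the theorem asserts much more.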
Set $A=\bigcup_{0<s_1<\cdots<s_m\le a}\sigma\left(B(s_j): j=1,\ldots,m\right)$ and $B=\bigcup_{0<u_1<\cdots<u_n}\sigma\left(W(u_k): k=1,\ldots,n\right)$.
If I understand properly, the statement in $\color{red}{\text{red}}$ means that $A$ and $B$ are each closed under intersection.
In general, could you please explain:

- whether my interpretation is correct;
- how one can explicitly show that $A$ and $B$ are $\cap$-stable?
The statement in $\color{red}{\text{red}}$, and then what follows it, mean three things, all of which need discussion:
$$A \text{ and } B \text{ are closed under intersection}$$
and
$$\sigma(A)=\mathcal{F}_a^B \quad\text{and}\quad \sigma(B)=\mathcal{F}_{\infty}^W$$
and
$$A\perp\!\!\!\perp B \implies \mathcal{F}_a^B\perp\!\!\!\perp\mathcal{F}_{\infty}^W$$
The first is a fact that can be easily verified.
First, let us understand what $A$ is. We have: $$ A = \bigcup_{0 < s_1<\cdots<s_m \leq a} \sigma(B(s_i) : i=1,\ldots,m) $$
So $A$ is the union of all $\sigma$-algebras generated by finitely many time indices in the interval $(0,a]$ (so $0$ is excluded, $a$ is allowed). By "the elements of the union forming $A$", I mean a single $\sigma(B(s_i) : i=1,2,\ldots,m)$ for a choice of $0<s_1<s_2<\cdots<s_m \leq a$.
For example, take $a=7$.
$\sigma(B(3))$ is in the union, because $0<3 \leq 7$.
$\sigma(B(0.5),B(1),B(3),B(2\pi),B(7))$ is in the union, because $0<0.5<1<3<2 \pi < 7 \leq 7$.
$\sigma(B(1),B(8))$ is not in the union because $8>7$.
So, this is what I mean by "the elements in the union forming $A$".
Let us put this in words: when we say that an event belongs to a $\sigma$-algebra generated by some random variables, it means that if we know the values of all those random variables, we know whether this event happened or not.
So, $\sigma(B(1),B(2))$, for example, is the set of all events which are determined by $B(1)$ and $B(2)$. The event $\{B(2) \leq 5 , B(2) e^{B(1)} \leq 9\}$ would lie in this $\sigma$-algebra, but the event $\{B(1) - B(0.5) \leq 2\}$ would not, since it involves $B(0.5)$.
Now, let us provide a heuristic proof that $A$ is closed under intersection. Let $S_1$ and $S_2$ belong to $A$. Then each belongs to one of the elements of that union. For example, say that $S_1$ belongs to $\sigma(B(1),B(6))$ and $S_2$ belongs to $\sigma(B(0.5),B(\pi-1), B(e^{1.8}))$. What that means is this: $S_1$ is determined completely by $B(1)$ and $B(6)$, and $S_2$ is determined completely by $B(0.5),B(\pi-1)$ and $B(e^{1.8})$.
What is a logical guess for what $S_1 \cap S_2$ is determined by? Well, if we knew all of $B(0.5),B(1),B(\pi-1),B(6)$ and $B(e^{1.8})$, we would know about both $S_1$ and $S_2$ and hence about $S_1 \cap S_2$. In other words, $S_1\cap S_2$ should belong to $\sigma(B(0.5),B(1),B(\pi-1),B(6),B(e^{1.8}))$. This is also one of the elements in the union which forms $A$, because $0 < 0.5 < 1 < \pi-1<6<e^{1.8}\leq 7$. Hence $S_1 \cap S_2$ belongs to $A$.
Let us now go to the algebra. We have :
$$ S_1 \in \sigma(B(s_1),...,B(s_l)) \\ S_2 \in \sigma(B(t_1),...,B(t_m)) $$
for some $0 <s_1 < s_2<\cdots<s_l \leq a$ and $0 < t_1<\cdots<t_m \leq a$. Now, consider the set of indices $\{s_i\} \cup \{t_j\}$ (it is a set, so if some $s_i= t_j$ we count that just once). Call this set $\{u_i\}_{i=1,\ldots,N}$, and sort it in ascending order: $0<u_1<u_2<\cdots<u_N \leq a$. It is clear that $S_1 \in \sigma(B(u_1),\ldots,B(u_N))$ and $S_2 \in \sigma(B(u_1),\ldots,B(u_N))$, because this $\sigma$-algebra contains both the $\sigma$-algebras which $S_1,S_2$ belong to. Since a $\sigma$-algebra is closed under intersection, $S_1 \cap S_2 \in \sigma(B(u_1),\ldots,B(u_N))$. But $\sigma(B(u_1),\ldots,B(u_N))$ is one of the elements in the union which forms $A$. It follows that $S_1 \cap S_2 \in A$.
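The merge of the two index sets in the argument above can be sketched in a few lines (a toy illustration with made-up times; `s_times` and `t_times` play the roles of $\{s_i\}$ and $\{t_j\}$):

```python
a = 7.0

# Index sets of the two sigma-algebras containing S1 and S2.
s_times = (1.0, 6.0)            # S1 in sigma(B(1), B(6))
t_times = (0.5, 2.14, 6.05)     # S2 in sigma(B(0.5), B(2.14), B(6.05))

# Merge: union of the two sets (duplicates counted once), sorted ascending.
u_times = sorted(set(s_times) | set(t_times))
print(u_times)

# The merged tuple is again an admissible index set for the union forming A:
assert all(0 < u <= a for u in u_times)
```

The only point that matters is that the merged family of times still lies in $(0,a]$, so $\sigma(B(u_1),\ldots,B(u_N))$ is again one of the elements of the union forming $A$.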
In a similar way, I encourage you to show that $B$ is closed under intersection.
The second part comes from a definition: the Brownian filtration is the $\sigma$-algebra generated by all finite-dimensional cylinder sets, each of which lies in a $\sigma$-algebra of the kind described above. For example, see equation $(2.16)$ on page $15$ of Schilling–Partzsch: a similar thing holds here, and it is a definition.
For the third, we use the Dynkin $\pi$-$\lambda$ theorem. Basically, we know from $(2)$ that $A$ and $B$ are independent, that $A$ generates $\mathcal F^B_a$, and that $B$ generates $\mathcal F^W_\infty$. We will prove the result in two steps, although you could compress them into one.
First, we will prove that if $A \perp\!\!\!\perp B$, then $\mathcal F^B_a \perp \!\!\! \perp B$.
Then, from $\mathcal F^B_a \perp \!\!\!\perp B$, we will deduce that $\mathcal F^B_a \perp \!\!\!\perp \mathcal F^W_{\infty}$. This second proof is very similar; only a switching of roles is required compared to the first step.
Define $\mathcal G = \{C \in \mathcal F^B_a :P(C \cap D) = P(C)P(D)\ \forall D \in B\}$. That is, $\mathcal G$ is the set of all events in $\mathcal F^B_a$ which are independent of every event in $B$.
Note that $A \subseteq \mathcal G$: every element of $A$ is independent of every element of $B$ by $(2)$. This condition was violated in my last answer, so it was incorrect, and I had to go this way.
We know that $A$ is a $\pi$-system: a $\pi$-system is a collection of sets which is closed under finite intersection, which we have already shown for $A$. Now it is enough to show that $\mathcal G$ is a $\lambda$-system (also called a Dynkin system): if this is true, then by the theorem $\mathcal G$ contains the smallest $\sigma$-algebra containing $A$, which is equal to $\mathcal F^B_a$.
To show that $\mathcal G$ is a $\lambda$-system, we check the defining conditions: $\Omega \in \mathcal G$, closure under complements, and closure under countable disjoint unions.
Certainly $\Omega \in \mathcal G$, since $P(\Omega \cap D) = P(D) = P(\Omega)P(D)$ for every $D \in B$.
Suppose $C \in \mathcal G$, and let $D \in B$. We know that $P(C \cap D) = P(C)P(D)$. But we also know that $P(D) = P(C\cap D) + P(C^c \cap D)$, so using this we get $$ P(C^c \cap D) = P(D) - P(C \cap D) = P(D) - P(C)P(D)=P(D)(1-P(C)) = P(D)P(C^c) $$
so $C^c \in \mathcal G$.
Finally, suppose $C_1,C_2,\ldots \in \mathcal G$ are pairwise disjoint, and let $D \in B$. By countable additivity, $$ P\left(\left(\bigcup_i C_i\right) \cap D\right) = \sum_i P(C_i \cap D) = \sum_i P(C_i)P(D) = P\left(\bigcup_i C_i\right)P(D) $$ so $\bigcup_i C_i \in \mathcal G$.
Thus, by the $\pi$-$\lambda$ theorem, we get that $\mathcal{F}^B_a\subseteq\mathcal{G}$: in plain words, that means that if you are in $\mathcal{F}^B_a$, you are in $\mathcal{G}$, which is, by definition, composed of all elements of $\mathcal{F}^B_a$ which are independent of each element of $B$. That can be summarized by stating that $\mathcal F^B_a$ is independent of $B$. Now repeat the argument with the roles switched: take the $\pi$-system to be $B$, and define $\mathcal G' = \{D \in \mathcal F^W_{\infty} : P(C \cap D) = P(C)P(D)\ \forall C \in \mathcal F^B_a\}$. Then $B \subseteq \mathcal G'$ by the first step, $\mathcal G'$ is a $\lambda$-system by the same computations, and the $\pi$-$\lambda$ theorem gives $\mathcal F^W_{\infty} \subseteq \mathcal G'$, which is the final result $\mathcal F^B_a \perp\!\!\!\perp \mathcal F^W_{\infty}$.
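To see the whole mechanism in miniature, here is a toy check on a finite probability space (entirely hypothetical, with two independent fair coins in place of the two processes): each $\pi$-system consists of a single event, the two events are independent, and independence indeed extends to the generated $\sigma$-algebras, exactly as the Dynkin argument predicts.

```python
from itertools import product

# Sample space: two independent fair coins, uniform measure.
omega = set(product("HT", repeat=2))

def P(event):
    return len(event) / len(omega)

def generate_sigma_algebra(collection):
    """Close a collection of subsets of omega under complement and union
    (on a finite space this yields the generated sigma-algebra)."""
    sigma = {frozenset(), frozenset(omega)} | {frozenset(e) for e in collection}
    while True:
        new = {frozenset(omega - e) for e in sigma}
        new |= {e | f for e in sigma for f in sigma}
        if new <= sigma:
            return sigma
        sigma |= new

# Pi-systems: a single event each (trivially intersection-stable).
first_heads  = {w for w in omega if w[0] == "H"}
second_heads = {w for w in omega if w[1] == "H"}
assert P(first_heads & second_heads) == P(first_heads) * P(second_heads)

# Independence extends from the pi-systems to the generated sigma-algebras.
F1 = generate_sigma_algebra([first_heads])
F2 = generate_sigma_algebra([second_heads])
for C in F1:
    for D in F2:
        assert P(C & D) == P(C) * P(D)
print("independence extends to the generated sigma-algebras")
```

Of course, on a finite space one can check all pairs by brute force; the point of the $\pi$-$\lambda$ theorem is that the same conclusion holds when the $\sigma$-algebras, like $\mathcal F^B_a$ and $\mathcal F^W_\infty$, are far too large to enumerate.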