Adapted and backward adapted?


I understand the following: Consider a probability space $(\Omega, \mathcal{A},P)$ and a Brownian motion $B=\{B_t, t\in [0,1]\}$ on this space and denote $\mathcal{F}:=(\mathcal{F}_t)_{t\in [0,1]}$ the natural filtration augmented by all $P$-null sets.

A stochastic process $u=\{u(t), t\in [0,1]\}$ is said to be adapted to the filtration $\mathcal{F}$ if $u(t)$ is $\mathcal{F}_t$-measurable for each $t\in [0,1]$, meaning that, for each $t\in [0,1]$, the random variable $u(t,\cdot):\Omega\rightarrow \mathbb R$ is $(\mathcal{F}_t,\mathcal{B}(\mathbb R))$-measurable. I understand this as being able to have full knowledge of $u(t)$ at time $t$ from the information in $\mathcal{F}_t$; in other words, there is a measurable function $f$ such that $u(t)=f(B_{\leq t})$, i.e. $u(t)$ is a functional of the Brownian path up to time $t$. For instance, $u(t) = \int_0^t B_s \, ds$ is $\mathcal{F}_t$-measurable for any given $t\in [0,1]$: it only requires the information of $B$ up to time $t$.
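To make this concrete, here is a small numerical sketch (my own illustration, not part of the original question): a Brownian path is simulated on a uniform grid, and $u(t)=\int_0^t B_s \, ds$ is approximated by a left Riemann sum that reads only the values $B_s$ for $s\le t$ — which is exactly what adaptedness means here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one Brownian path on a uniform grid of [0, 1].
n = 1000
dt = 1.0 / n
increments = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(increments)))

def u(t):
    """Left-Riemann-sum approximation of u(t) = int_0^t B_s ds.

    Only the values B_s for s <= t are read, which is exactly
    what F_t-measurability ("adaptedness") means here.
    """
    k = int(round(t / dt))
    return np.sum(B[:k]) * dt

print(u(0.5))  # depends only on the path up to time 0.5
```

Overwriting the path strictly after time $t$ leaves $u(t)$ unchanged, confirming that $u(t)$ is a functional of the path up to time $t$ only.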

Now I wonder what backward adapted, or a backward filtration, means. Is it the same idea, except that the process only uses future information, i.e. it is anticipative? For instance, would $u(t) = \int_t^1 B_s \, ds$ be backward adapted because it only uses the information of $B_s$ for $t\leq s\leq 1$?

We know that $\mathcal{F}_0$-measurable random variables are essentially constants. Would this mean that the random variable $u(1)=\int_0^1 B_s \, ds$ is backward measurable, because it is like starting from the end?

As another example, would $u(t) = \frac{\int_t^1 B_s \, dB_s}{\int_0^1 B_s \, ds}$, $t\in [0,1]$, be backward adapted? If not, why not?

I'm a bit lost in this matter; thank you very much for your kind help!

Best answer:

If $\mathcal{F} \subseteq \mathcal{A}$ is a sub-$\sigma$-algebra, then we can interpret $\mathcal{F}$ as our pool of information. For example for a stochastic process $(X_t)_{t \geq 0}$ the canonical filtration is given by

$$\mathcal{F}_t := \sigma(X_s; s \leq t),$$

i.e. $\mathcal{F}_t$ contains the information about the process up to time $t$ ("the past"). In particular, $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \leq t$ which means that we gather more and more information when time increases. If we assume that $(X_t)_{t \geq 0}$ is a martingale, then

$$X_s = \mathbb{E}(X_t \mid \mathcal{F}_s), \qquad s \leq t,$$

which shows that we can reconstruct the past of the process if we know the current position.
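The martingale identity $X_s = \mathbb{E}(X_t \mid \mathcal{F}_s)$ can be checked numerically for Brownian motion itself (a sketch of my own, not from the original answer): fix the position $B_s$, attach many independent continuations of the path, and the average of $B_t$ over those continuations should be close to $B_s$.

```python
import numpy as np

rng = np.random.default_rng(1)

s, t = 0.3, 0.8
n_paths = 200_000

# One fixed "past": the Brownian position at time s.
B_s = rng.normal(0.0, np.sqrt(s))

# Many independent futures: B_t = B_s + (B_t - B_s), where the
# increment is N(0, t - s) and independent of F_s.
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), size=n_paths)

# The martingale property E(B_t | F_s) = B_s says the Monte Carlo
# average of B_t over the continuations should be close to B_s.
print(B_t.mean(), B_s)
```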

For backward filtrations it is exactly the other way round: a backward filtration $(\mathcal{F}_t)_{t \geq 0}$ satisfies $\mathcal{F}_s \supseteq \mathcal{F}_t$ for $s \leq t$. Roughly speaking, this means that we lose information as time passes. To illustrate this, let us consider backward martingales. A stochastic process $(X_t)_{t \geq 0}$ is called a backward martingale if

$$\mathbb{E}(X_t \mid \mathcal{F}_s) = X_s$$

for all $s \geq t$ (note that, in contrast to the martingale case, $s \geq t$ and not $s \leq t$). If we set $t=0$, then we find

$$\mathbb{E}(X_0 \mid \mathcal{F}_s) = X_s.$$

Consequently, at time $t=0$ we can predict the whole process (recall that $\mathcal{F}_0 \supseteq \mathcal{F}_s$, so if we interpret the $\sigma$-algebras as information, then at $t=0$ we already know $\mathcal{F}_s$). In contrast to the martingale case, we cannot reconstruct the past of our process, but we can predict its future.
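A classical discrete-time example (added here for illustration; it is not in the original answer) is the running average of i.i.d. integrable random variables $\xi_1, \xi_2, \dots$:

$$X_n := \frac{S_n}{n}, \qquad S_n := \xi_1 + \dots + \xi_n, \qquad \mathcal{F}_n := \sigma(S_n, \xi_{n+1}, \xi_{n+2}, \dots).$$

Here $\mathcal{F}_n \supseteq \mathcal{F}_{n+1}$, i.e. $(\mathcal{F}_n)_n$ is a backward filtration, and by exchangeability $\mathbb{E}(\xi_i \mid \mathcal{F}_n) = S_n/n$ for each $i \leq n$, so for $m \leq n$

$$\mathbb{E}(X_m \mid \mathcal{F}_n) = \frac{1}{m} \sum_{i=1}^m \mathbb{E}(\xi_i \mid \mathcal{F}_n) = \frac{S_n}{n} = X_n,$$

which is the backward martingale property above (this is the backward martingale used in one standard proof of the strong law of large numbers).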

Let me finally remark that you always have to be careful about the filtration in use. The question "Is this process backward measurable?" does not make sense unless you specify the (backward) filtration. So, for example, if we set

$$\mathcal{G}_t := \sigma(B_s; t \leq s \leq 1),$$

then $(\mathcal{G}_t)_{t \in [0,1]}$ defines a backward filtration, and $$X_t := \int_t^1 B_s \, ds$$ is backward measurable (i.e. $\mathcal{G}_t$-measurable for each $t$) with respect to $(\mathcal{G}_t)_{t \in [0,1]}$.
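This mirrors the adapted case with the time direction reversed, and can be sketched numerically in the same way (my own illustration, under the same grid-approximation assumptions as before): the Riemann sum for $X_t = \int_t^1 B_s \, ds$ reads only the values $B_s$ for $s \geq t$, so it is a functional of the future of the path, i.e. $\mathcal{G}_t$-measurable.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate one Brownian path on a uniform grid of [0, 1].
n = 1000
dt = 1.0 / n
increments = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(increments)))

def X(t):
    """Left-Riemann-sum approximation of X_t = int_t^1 B_s ds.

    Only B_s for s >= t is read: X_t is a functional of the
    future of the path, i.e. G_t-measurable for the backward
    filtration G_t = sigma(B_s; t <= s <= 1).
    """
    k = int(round(t / dt))
    return np.sum(B[k:-1]) * dt

print(X(0.4))  # depends only on the path on [0.4, 1]
```

Overwriting the path strictly before time $t$ leaves $X_t$ unchanged, the mirror image of the forward-adapted example.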