Understanding Filtrations


I can see that a filtration on a probability space $(\Omega, \mathcal{F}, P)$ is defined thus: an increasing collection of $\sigma$-fields, $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \cdots \subseteq \mathcal{F}_n \subseteq \cdots \subseteq \mathcal{F}$
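To make sure I'm reading the definition correctly, here is a toy encoding of my own (not from the text): a $\sigma$-field is a set of events, and the filtration condition is just the subset chain.

```python
# Toy encoding (my own, not from the text): a sigma-field is a
# frozenset of events, each event a frozenset of outcomes.
omega = frozenset({"H", "T"})

F0 = frozenset({frozenset(), omega})                 # trivial field
F1 = frozenset({frozenset(), frozenset({"H"}),
                frozenset({"T"}), omega})            # full field

def is_filtration(fields):
    """Check the defining property: each field is contained in the next."""
    return all(a <= b for a, b in zip(fields, fields[1:]))

print(is_filtration([F0, F1]))  # True
print(is_filtration([F1, F0]))  # False
```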

This means that it is a collection of $\sigma$-fields bounded above by $\mathcal{F}$ (in the discrete case, the power set $2^{\Omega}$), which is the largest element of this collection.

The text(s) then usually define what it means for a sequence of random variables to be adapted to a filtration, and suggest that this is a way to model knowledge at some time $n$.

This is what I find confusing: for a finite set of outcomes, there is clearly a largest possible $\sigma$-algebra. How can the filtration then model a potentially infinite time series?

For instance, if I have two outcomes, $H$ and $T$, the largest $\sigma$-algebra I can possibly have is $\mathcal{F} = \{\emptyset, \{H\}, \{T\}, \{H,T\}\}$
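As a sanity check (again my own code, nothing from the text), I can confirm that this four-element collection really is the power set of $\Omega$ and is closed under complements and unions:

```python
from itertools import combinations

omega = frozenset({"H", "T"})

# Enumerate all subsets of omega: this gives exactly the four
# events listed above.
F = frozenset(frozenset(c) for r in range(len(omega) + 1)
              for c in combinations(omega, r))

def is_sigma_algebra(G, omega):
    """Finite case: contains omega, closed under complement and union."""
    return (omega in G
            and all(omega - A in G for A in G)
            and all(A | B in G for A in G for B in G))

print(len(F), is_sigma_algebra(F, omega))  # 4 True
```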

If the information I am trying to represent is $HHHTTTHHH$, how would my filtration represent it differently from $TTTHHHTTT$? My understanding is that the filtration at the end of either of these data sequences will remain a collection of sub-$\sigma$-algebras of $\mathcal{F}$, which gives me no way to distinguish them, or indeed to represent any data stream in a nontrivial way that actually captures the order of arrival.
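To make my worry concrete (my own enumeration, not from any text): on this two-outcome $\Omega$ there are only two sub-$\sigma$-algebras of $\mathcal{F}$ at all, so a filtration here seems to have almost no room to encode anything:

```python
from itertools import combinations

omega = frozenset({"H", "T"})
events = [frozenset(), frozenset({"H"}), frozenset({"T"}), omega]

def is_sigma_algebra(G):
    """Finite case: contains the empty set and omega,
    closed under complement and union."""
    G = frozenset(G)
    return (omega in G and frozenset() in G
            and all(omega - A in G for A in G)
            and all(A | B in G for A in G for B in G))

# Enumerate every sub-collection of events and keep the sigma-algebras.
subs = [frozenset(c) for r in range(1, len(events) + 1)
        for c in combinations(events, r) if is_sigma_algebra(c)]

print(len(subs))  # 2 -- only the trivial field and the full power set
```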


I've had a look at other questions, particularly this one: Filtration and measure change. However, I think I'm at a more basic level right now.

I've asked a previous question on this, but it was very, very vague; I will delete it.