Is working modulo null sets really insufficient for studying stochastic processes?


The "measure algebra" approach to probability (which uses the quotient of the Lebesgue algebra, or even just the first couple of levels of the Borel algebra, by its null sets) appears to have many advantages, some of which are discussed on the Encyclopedia of Mathematics wiki. As a reason for its not being more mainstream, that article quotes "Probability with Martingales" by David Williams, who on page xiii says:

I hope that this book will tempt you to progress to the much more interesting, and more important, theory where the parameter set of our process is uncountable (e.g. it may be the time-parameter set [0,∞)). There, the equivalence-class formulation just will not work: the 'cleverness' of introducing quotient spaces loses the subtlety which is essential even for formulating the fundamental results on existence of continuous modifications, etc., unless one performs contortions which are hardly elegant. Even if these contortions allow one to formulate results, one would still have to use genuine functions to prove them; so where does the reality lie?!

The measure algebra approach seems to me motivated by the intuitive principle that events of measure zero "shouldn't matter". Does this fail in some situations? Is there not, despite what Williams says, any way to work around such difficulties which salvages the principle?

There is 1 best solution below


The point Williams is making is that the measure algebra approach is fine when you are studying just one measure and one $\sigma$-algebra. But in the study of stochastic processes you end up studying uncountable collections of measures and $\sigma$-algebras, indexed by the interval $[0,\infty)$. On top of that, the usual Lebesgue measure lives on the time axis $[0,\infty)$ itself.

Some of the classical theorems on stochastic processes are about the interplay between these measures and functions from $[0,\infty)$ to $\mathbb{R}$. These results refer to various notions of continuity of these functions.
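For concreteness, here is the standard notion Williams alludes to, sketched in the usual notation (not taken from his book): a process $\tilde{X}$ is a *modification* of $X$ when the two agree in probability at each fixed time, and Kolmogorov's continuity criterion concludes that, under suitable moment bounds, some modification has continuous sample paths.

```latex
% \tilde{X} is a modification of X: equality holds with probability 1
% at each fixed time t, with the null set allowed to depend on t.
\mathbb{P}\bigl(X_t = \tilde{X}_t\bigr) = 1
\quad \text{for every } t \in [0,\infty).
% The conclusion of Kolmogorov's continuity criterion quantifies over
% individual sample points:
% t \mapsto \tilde{X}_t(\omega) \text{ is continuous for almost every } \omega.
```

Both the hypothesis and the conclusion quantify over individual time points $t$ and individual sample points $\omega$, which is exactly the information that passing to equivalence classes discards.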

Now, if you mod out by measure zero subsets of the parameter/time space $[0,\infty)$, you have to talk about equivalence classes of functions. This adds an extra layer to all the definitions and proofs, which Williams is saying is not elegant.

As a trivial example, working with equivalence classes of functions makes it impossible even to state "continuous everywhere", because that property is not respected by the equivalence relation of almost-everywhere equality.
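To see this concretely (the particular functions here are my own choice of a standard example), take a function that is zero everywhere and one that is perturbed on a single point:

```latex
% f and g differ only on the Lebesgue-null set \{0\}, so [f] = [g]
% in the quotient, yet f is continuous everywhere and g is not.
f(x) = 0 \quad \text{for all } x \in \mathbb{R},
\qquad
g(x) = \mathbf{1}_{\{0\}}(x) =
\begin{cases} 1 & x = 0, \\ 0 & x \neq 0. \end{cases}
```

Since $f = g$ almost everywhere, they represent the same equivalence class, so "is continuous everywhere" cannot be a property of that class.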

Also, Williams says that the proofs in the area generally use actual functions, rather than equivalence classes of them, reducing the benefit of starting with an equivalence class. The equivalence class method is most useful when you don't have to "pierce the bubble" and take representatives in every proof.