Rate of convergence for filtration and conditional expectations


I've cross-posted this question at MathOverflow in the hope of generating more ideas.

Let $(\Omega, \mathcal{F},P)$ be a probability space and $(\mathcal{F}_n)_n$ a filtration that increases to $\mathcal{F}$.

Is there a way to quantify the "rate of convergence" of $\mathcal{F}_n \uparrow \mathcal{F}?$

I'll now try to clarify the question by explaining its motivation. I was wondering what can be said about the rate of convergence in Lévy's martingale convergence theorem. By that result, for an integrable random variable $X$ we have $$E(X \mid \mathcal{F}_n) \to X$$ almost surely. I was wondering if we can make a statement like $$P(|E(X \mid \mathcal{F}_n) - X| > \epsilon) = O(n^{-a})$$ under fairly general assumptions.
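For intuition, here is a small numerical sketch (my own illustration, not part of the question): take $X$ uniform on $[0,1]$ and the dyadic filtration $\mathcal{F}_n = \sigma(\lfloor 2^n X \rfloor)$. Then $E(X \mid \mathcal{F}_n)$ is the midpoint of the dyadic cell of width $2^{-n}$ containing $X$, so the error is at most $2^{-(n+1)}$ almost surely, and the probability above is exactly zero once $\epsilon \geq 2^{-(n+1)}$ — already showing that the rate is a property of the filtration, not just of $X$.

```python
import random

def cond_exp_dyadic(x, n):
    """E[X | F_n] for X uniform on [0,1] and F_n = sigma(floor(2**n * X)):
    the midpoint of the dyadic cell of width 2**-n containing x."""
    k = min(int(x * 2**n), 2**n - 1)  # index of the cell, guarding against x == 1.0
    return (k + 0.5) / 2**n

random.seed(0)
samples = [random.random() for _ in range(100_000)]

for n in range(1, 7):
    max_err = max(abs(cond_exp_dyadic(x, n) - x) for x in samples)
    # the error is bounded by half the cell width, i.e. 2**-(n+1), almost surely
    assert max_err <= 2**-(n + 1)
```

The function `cond_exp_dyadic` is a hypothetical helper name; the bound it verifies is the deterministic half-cell-width estimate.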

As pointed out to me by Bananach in this question, it's hopeless to expect the rate to be independent of the particular filtration, because we can replace $\mathcal{F}_n$ with $$\bar{\mathcal{F}}_n = \mathcal{F}_{\lfloor \sqrt{n} \rfloor}$$ and obtain a new filtration for which the conditional expectations converge more slowly.

But perhaps if we knew "how quickly" $\mathcal{F}_n$ increases to $\mathcal{F}$, we could find a rate of convergence of conditional expectations that depends on the "rate of convergence" of the filtration.

An example is given in Michael's answer to the same question that I linked to above. Let $X$ be uniformly distributed on $[-1,1]$, assume $\mathcal{F} = \sigma(X)$, and define $$Z_n = X \mathbf{1}_{|X|>2^{-n}} \ \ \text{and} \ \ \mathcal{F}_n = \sigma(Z_1,\dots,Z_n).$$ Then $$E(X \mid \mathcal{F}_n) = X \mathbf{1}_{|X|>2^{-n}} \to X,$$ and, for $0 < \epsilon < 2^{-n}$, $$P(|E(X \mid \mathcal{F}_n) - X| > \epsilon) = 2^{-n} - \epsilon = O(2^{-n}).$$
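As a sanity check on this example (my own code, assuming the setup above), a Monte Carlo estimate of the tail probability matches the closed form: since $|X|$ is uniform on $[0,1]$, the error $|X|\mathbf{1}_{|X| \le 2^{-n}}$ exceeds $\epsilon$ with probability $P(\epsilon < |X| \le 2^{-n}) = 2^{-n} - \epsilon$, which is of order $2^{-n}$ for small $\epsilon$.

```python
import random

def tail_prob(n, eps, num_samples=200_000, seed=1):
    """Monte Carlo estimate of P(|E[X|F_n] - X| > eps) for Michael's example:
    X uniform on [-1,1], E[X|F_n] = X * 1{|X| > 2**-n}, so the error equals
    |X| on the event {|X| <= 2**-n} and 0 otherwise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(num_samples):
        x = rng.uniform(-1.0, 1.0)
        err = abs(x) if abs(x) <= 2**-n else 0.0
        if err > eps:
            hits += 1
    return hits / num_samples

n, eps = 3, 0.01
estimate = tail_prob(n, eps)
exact = 2**-n - eps  # = P(eps < |X| <= 2**-n), since |X| is uniform on [0,1]
assert abs(estimate - exact) < 0.01
```

The helper name `tail_prob` is mine; the exact value it is compared against follows directly from the uniform distribution of $|X|$.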

Another, trivial, example is $\mathcal{F} = \sigma(X)$ and $\mathcal{F}_n = \sigma(X)$ for all $n$, and then $$E(X \mid \mathcal{F}_n) = X.$$ The filtration converges "instantly" and so do the conditional expectations.

The examples suggest that the rate of convergence of the conditional expectations is the same as the "rate of convergence" of the filtration. Can this idea be made precise in general?

Answer:

I believe there are, in principle, several ways to quantify the rate of convergence of $\mathcal{F}_n$. Here is my suggestion. Disclaimer: it is a very crude, probably naive, approach, but perhaps it will give someone creative ideas.

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. The idea is to construct a sequence of non-negative numbers $(a_n)_{n \in \mathbb{N}}$ that tends to zero when $\mathcal{F}_n \rightarrow \mathcal{F}$ as $n \rightarrow +\infty$. The rate of convergence $R(n) = R_n(\mathcal{F}_n \rightarrow \mathcal{F})$ of the filtration is then understood as the speed of convergence of $(a_n)$, which is, at least in principle, an easier problem. We adopt the convention that if $a_n = 0$ for all $n$, then $R(n) = +\infty$ (instant convergence).

Following the idea of Davide Giraudo, define, for any $F \in \mathcal{F}$, $$ M_{n}(F) = \inf_{G \in \mathcal{F}_n} \mathbb{P}(F \,\Delta\, G). $$ This acts as the baseline. Some observations: $M_n(\Omega) = M_n(\emptyset) = 0$ (we must take care of such trivial cases), and if $\mathcal{F}_n = \mathcal{F}$ for all $n$, then $M_n(F) = 0$ for all $F \in \mathcal{F}$.

Now let $L_n$ and $U_n$ be defined as $$ L_n = \inf_{F \in \mathcal{F},\ \mathbb{P}(F) \neq 0} M_n(F), \qquad U_n = \sup_{F \in \mathcal{F}} M_n(F), $$ which amounts to considering the best-case (minimal discrepancy, $L_n$) and worst-case (maximal discrepancy, $U_n$) scenarios, where the trivial cases, e.g. $\emptyset$, are intentionally excluded. Observe that if a particular set $\bar{F} \in \mathcal{F}$ attains $U_n$, then every other set $F \in \mathcal{F}$ satisfies $0 \leq M_n(F) \leq M_n(\bar{F})$. Finally, let $$ a_n = \frac{L_n + U_n}{2}, $$ which acts as an "average discrepancy".

Let us check that this works when $\mathcal{F}_n = \mathcal{F}$: then $M_n(F) = 0$ for all $F$, so $L_n = U_n = 0$ and $a_n = 0$. Let us also check that if $\mathcal{F}_n$ does not actually converge to $\mathcal{F}$, then $a_n$ does not go to zero. It suffices to consider the limit $n \rightarrow +\infty$, where $L_n \rightarrow L$ and $U_n \rightarrow U$. Reasoning by contradiction: $a_n \rightarrow 0$ would imply $L = U = 0$. But $U = 0$ would mean $M_{+\infty}(\bar{F}) = 0$ for the set $\bar{F}$ maximizing the discrepancy, so every other set would also have $M_{+\infty}(F) = 0$ (by the previous property), which implies $\mathcal{F}_{+\infty} = \mathcal{F}$ ("almost surely"), a contradiction. Hence $U \neq 0$ and $a_n$ cannot go to zero.
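To make the construction concrete, here is a small sketch (my own toy example, not from the answer) on a finite probability space: $\Omega = \{0,1,2,3\}$ with the uniform measure, $\mathcal{F}_1$ generated by the partition $\{\{0,1\},\{2,3\}\}$, and $\mathcal{F}_2 = 2^{\Omega} = \mathcal{F}$. When $\mathcal{F}_n$ is generated by a finite partition, $M_n(F) = \sum_{A} \min(\mathbb{P}(A \cap F), \mathbb{P}(A \setminus F))$ over the atoms $A$, because the optimal $G$ is a union of atoms and should include exactly those atoms where inclusion removes more mass from $F \,\Delta\, G$ than it adds.

```python
from itertools import combinations

def M(F, atoms, p):
    """M_n(F) = inf over G in the algebra generated by `atoms` of P(F symdiff G).
    The optimal G keeps each atom A iff P(A & F) > P(A - F), giving this sum."""
    return sum(min(sum(p[w] for w in A & F), sum(p[w] for w in A - F)) for A in atoms)

def a_n(atoms, omega, p):
    """Average discrepancy a_n = (L_n + U_n) / 2 over all F with P(F) != 0."""
    events = [set(c) for r in range(1, len(omega) + 1) for c in combinations(omega, r)]
    vals = [M(F, atoms, p) for F in events]
    return (min(vals) + max(vals)) / 2

omega = {0, 1, 2, 3}
p = {w: 0.25 for w in omega}         # uniform measure
atoms1 = [{0, 1}, {2, 3}]            # atoms of F_1
atoms2 = [{0}, {1}, {2}, {3}]        # atoms of F_2 = full sigma-algebra
assert a_n(atoms1, omega, p) == 0.25  # L_1 = 0 (take F = {0,1}), U_1 = 0.5 (F = {0,2})
assert a_n(atoms2, omega, p) == 0.0   # F_2 = F, so M_2(F) = 0 for every F
```

Note that $L_1 = 0$ here because $\{0,1\} \in \mathcal{F}_1$ is already a non-null set of $\mathcal{F}$; this suggests $L_n$ vanishes as soon as $\mathcal{F}_n$ contains any non-null set, so the worst-case term $U_n$ carries most of the information.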

I am sure my answer could be improved; I was probably not mathematically precise enough, and there may be some oversights. This is just meant to generate ideas for a very interesting question.


EDIT: To be clear, the general procedure I am suggesting is the following: $$ \mathcal{F}_n \color{red}{\rightarrow} a_n \rightarrow R(n) \color{red}{\rightarrow} E(X \mid \mathcal{F}_n). $$ Start with $\mathcal{F}_n$ and from it generate a sequence $(a_n)$. Compute $R(n)$ from $(a_n)$ using the concepts given on the Wikipedia page on rates of convergence, which range from the simple ratio $R(n) = a_{n+1}/a_n$ to more elaborate quantities. Or just ask a mathematician. After that, ideally, you need a general formula linking $R(n)$ to the convergence $E(X \mid \mathcal{F}_n) \rightarrow X$. This is an independent problem, which can be tackled starting directly from $R(n)$. The red arrows denote the (hard) steps where great care must be taken, in order to avoid trivial results.
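As a minimal sketch of the $a_n \rightarrow R(n)$ step (my own illustration, with hypothetical helper names): applying the simple ratio test to a geometric sequence such as the tail probabilities $2^{-n}$ from Michael's example recovers linear convergence with rate $1/2$.

```python
def ratios(a):
    """Successive ratios a_{n+1} / a_n; a roughly constant ratio q < 1
    indicates linear (geometric) convergence with rate q."""
    return [a[i + 1] / a[i] for i in range(len(a) - 1)]

a = [2.0**-n for n in range(1, 8)]  # e.g. the tail probabilities 2**-n
assert ratios(a) == [0.5] * 6       # constant ratio 1/2: linear convergence
```

The harder red arrows (extracting a meaningful $(a_n)$ from $\mathcal{F}_n$, and transferring $R(n)$ to the conditional expectations) are exactly the parts this mechanical step does not address.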