Second Borel-Cantelli lemma for martingales


I have a question about conditional probability (which I think comes down to me not properly understanding conditional expectation). First I will explain what I know; please correct anything that is not right. Let us assume we have the r.v. $X$ mapping $x$ to $x$, with $\Omega = [0,1)$, the Borel $\sigma$-algebra, and the uniform distribution as the probability measure. Then the event $\{X < a\}$ is measurable. If we partition $[0,1)$ into four sets and consider the corresponding $\sigma$-algebra, say $\mathcal G = \sigma\{[0,1/4), [1/4, 1/2), [1/2, 3/4), [3/4, 1)\}$, then $E(X\mid\mathcal G)$ can take only four values (otherwise it would not be $\mathcal G$-measurable).

Durrett writes that, informally, we may think of this new $\sigma$-algebra as the information we have at our disposal. Does that mean we are no longer told the exact outcome of the experiment (a number between $0$ and $1$), but only which of the four parts it has landed in, and the conditional expectation gives our best guess given that information? So in the end $E(X\mid\mathcal G)$ is still a random variable (the name "expectation" is a bit confusing). In the special case where we condition on the trivial $\sigma$-algebra $\{\varnothing, [0,1)\}$, we just get a constant r.v. equal to the expectation of $X$.
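To check that I am applying the definition correctly (this computation is my own, not from Durrett), in this example the conditional expectation should be
$$E(X\mid\mathcal G)=\sum_{i=1}^{4}\frac{E\big(X\,\mathbf 1_{I_i}\big)}{P(I_i)}\,\mathbf 1_{I_i},\qquad I_i=\Big[\tfrac{i-1}{4},\tfrac{i}{4}\Big),$$
so on $[0,1/4)$ it takes the value $\frac{1}{1/4}\int_0^{1/4}x\,dx=\tfrac18$, and similarly $\tfrac38$, $\tfrac58$, $\tfrac78$ on the other three intervals, i.e. the midpoint of whichever quarter the outcome lands in.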

My question, which motivates this post, concerns the second Borel-Cantelli lemma, II (found in Durrett, p. 205): let $\mathcal{F}_n$, $n\geq 0$, be a filtration with $\mathcal{F}_0 = \{\varnothing, \Omega\}$, and let $A_n \in \mathcal{F}_n$, $n \geq 1$, be a sequence of events. Then

$$\{A_n \text{ i.o.}\} = \Big\{\sum_{n \geq 1} P(A_n \mid \mathcal{F}_{n-1}) = \infty\Big\}$$

Here, do I have to interpret $\sum_{n \geq 1}P(A_n\mid\mathcal{F}_{n-1})$ as a random variable? Is there an intuitive way to see why this must hold?
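If I understand correctly, in the special case where each $A_n$ is independent of $\mathcal F_{n-1}$ we have $P(A_n\mid\mathcal F_{n-1}) = P(A_n)$ a.s., so the right-hand event is (up to a null set) either $\varnothing$ or $\Omega$ according to whether $\sum_{n\geq 1}P(A_n)$ converges or diverges, and the statement combines the two classical Borel-Cantelli lemmas; it is the general, dependent case that I cannot picture.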

Best answer

The statement in Durrett is not very precise: these events are not equal in general. The real statement is (should be) that their symmetric difference has probability zero, i.e. the two events coincide almost surely.

Also, when you ask for an intuitive explanation, you should be prepared that intuition often fails in probability theory, especially when infinite quantities are involved. Having warned you, I'll try to give an explanation (not sure if it's very intuitive, but I can't think of anything else).

Imagine that you are offered the chance to take part in a daily lottery which pays a dollar if you win. To play, you must buy a ticket every day for the next day's drawing. Two things are known:

  • the probability of winning the next day depends on the previous outcomes;
  • the lottery is fair in the sense that the price of a ticket equals the expected gain, i.e. the aforementioned probability.

Since the lottery is fair, it should be intuitively clear that you pay an infinite total price (having participated in this game indefinitely) if and only if your total gain is infinite. Denoting $A_n = \{n\text{th-day win}\}$, the total price is $\sum_{n \geq 1} P(A_n \mid \mathcal F_{n-1})$ and the total gain is the number of days on which $A_n$ occurs, so this is precisely the statement you are asking about. Moreover, on the event where you pay this infinite price, the fair-game assumption also suggests that the ratio of total price paid to total gain should converge to $1$, and here intuition does not fail you either: for almost all $\omega \in\{\sum_{n=1}^\infty P(A_n\mid \mathcal F_{n-1})=\infty\}$,
$$ \frac{\sum_{k=1}^n P(A_k\mid \mathcal F_{k-1})}{\sum_{k=1}^n \mathbf{1}_{A_k}}\to 1,\quad n\to\infty. $$
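For what it's worth, here is a quick simulation sketch of such a lottery (the rule I use to make the winning probability depend on past outcomes is an arbitrary choice of mine, picked only so that the total price diverges); it illustrates the ratio above getting close to $1$:

```python
import numpy as np

# A minimal sketch of the fair daily lottery described above (the update rule
# for the winning probability is my own arbitrary choice, not from Durrett).
# Each day n the player pays the fair price p_n = P(A_n | F_{n-1}), which may
# depend on earlier outcomes, and receives 1 dollar on a win.

rng = np.random.default_rng(0)
n_days = 200_000

paid = 0.0   # running total of fair prices:  sum_k P(A_k | F_{k-1})
wins = 0     # running total of payoffs:      sum_k 1_{A_k}

p = 0.5      # winning probability for day 1
for n in range(1, n_days + 1):
    paid += p
    if rng.random() < p:              # the event A_n occurs
        wins += 1
        p = 0.8                       # a win improves tomorrow's odds
    else:
        # a loss lowers the odds, but sum_n 1/sqrt(n) diverges,
        # so the total price paid still grows without bound
        p = 1.0 / np.sqrt(n + 1)

print(f"total paid = {paid:.1f}, total wins = {wins}, ratio = {paid / wins:.4f}")
# The printed ratio comes out close to 1, matching the almost-sure limit above.
```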