David Williams "Probability with Martingales" Exercise 4.1


Let me preface this by saying that basically the same question has been asked before on the StackExchange. However, there is one small detail in an exercise that I cannot reconcile.

The following question is Exercise 4.1 in "Probability with Martingales" by David Williams:


Let $(\Omega, \mathcal{F}, P)$ be a probability triple. Let $\mathcal{I}_1$, $\mathcal{I}_2$, and $\mathcal{I}_3$ be three $\pi$-systems on $\Omega$ such that for $k=1,2,3$, $$\mathcal{I}_k\subseteq\mathcal{F}\quad \text{and} \quad\Omega\in \mathcal{I}_k.$$ Prove that if $$P(I_1\cap I_2\cap I_3) = P(I_1) \cdot P(I_2) \cdot P(I_3)$$ whenever $I_k\in \mathcal{I}_k$ $(k=1,2,3)$, then $\sigma(\mathcal{I}_1)$, $\sigma(\mathcal{I}_2)$, and $\sigma(\mathcal{I}_3)$ are independent. Why did we require that $\Omega\in\mathcal{I}_k$?


It seems to me that one can simply mimic the proof of the Lemma on page 39 of his book, where $k=2$. I believe I carried out that proof successfully. However, that Lemma did not assume $\Omega\in\mathcal{I}_k$ for $k=1,2$. So I am confused by the assumption in E4.1, and by the question "Why did we require that $\Omega\in\mathcal{I}_k$?" Is this simply a carelessly worded exercise whose answer is "We did not actually need this assumption"? Or am I in fact missing something?


The proof I have is as follows (mimicking a previous lemma in Williams):

Fix $I_1\in\mathcal{I}_1$ and $I_2\in\mathcal{I}_2$. Consider the maps $$ J_3 \mapsto P(I_1\cap I_2\cap J_3) \quad \text{and}\quad J_3\mapsto P(I_1)\cdot P(I_2)\cdot P(J_3),$$ for $J_3\in\sigma(\mathcal{I}_3)$. One can verify that these are measures on the measurable space $(\Omega, \sigma(\mathcal{I}_3))$, and that they both have a total mass of...

OH WAIT! Just as I was typing the "..." above, I realized something. I think the technical difficulty is that Williams' earlier Lemma used the following result: if two finite measures agree on a $\pi$-system and have the SAME total mass, then they agree on the $\sigma$-algebra generated by that $\pi$-system.

Therefore, the technical issue is that the total masses of the two measures above are $$P(I_1\cap I_2)\quad \text{and}\quad P(I_1)\cdot P(I_2),$$ respectively. If $\Omega\not\in \mathcal{I}_3$, these two numbers need not be equal (the hypothesis in the problem involves all three $\pi$-systems at once). Repeating this measure construction two more times, the same issue comes up, requiring that $\Omega\in\mathcal{I}_k$ for $k=1,2$ as well.
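
To spell out the step where the assumption enters (a sketch of my reasoning, not Williams' text): taking $I_3=\Omega$, which is legitimate precisely because $\Omega\in\mathcal{I}_3$, the hypothesis yields
$$P(I_1\cap I_2)=P(I_1\cap I_2\cap\Omega)=P(I_1)\cdot P(I_2)\cdot P(\Omega)=P(I_1)\cdot P(I_2),$$
so the two measures do have the same total mass, and the uniqueness result applies on $\sigma(\mathcal{I}_3)$.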


This raises the following question: is the requirement $\Omega\in \mathcal{I}_k$ only an issue for this particular proof? The previous question linked above does not require it for the same exercise. However, the proof there used the $\pi$-$\lambda$ theorem, which was not directly invoked in Williams' earlier Lemma for the $k=2$ case (and recall that that Lemma did not need $\Omega\in\mathcal{I}_k$ for $k=1,2$).


2 Answers

Accepted answer:

Regarding your new question: the hypothesis $\Omega\in{\cal I}_k$ for each $k$ is necessary when we talk about independence of $\sigma$-algebras generated from more than two $\pi$-systems, for exactly the reason you discovered. (The other question you linked should also have required this hypothesis; note that the accepted answer there does assume that $\Omega$ is among the sets for which the product condition holds.)

When there are only two $\pi$-systems, the assertion $P(\Omega\cap H)=P(\Omega)P(H)$ holds automatically (both sides equal $P(H)$), so adjoining $\Omega$ adds no new conditions and the hypothesis is unnecessary.
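
Concretely (my gloss, not part of the original answer): in the two-system proof one compares the measures $H\mapsto P(I_1\cap H)$ and $H\mapsto P(I_1)P(H)$ on $\sigma(\mathcal{I}_2)$, and their total masses agree for free:
$$P(I_1\cap\Omega)=P(I_1)=P(I_1)\cdot P(\Omega).$$
No appeal to $\Omega\in\mathcal{I}_k$ is needed at any point.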

If you look closely at the proof of Lemma 1.6 in Williams' Appendix A, you'll see that the requirement $\mu_1(\Omega)=\mu_2(\Omega)$ is necessary before we can apply Dynkin's lemma.
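
To see where that requirement enters (a standard sketch of the uniqueness argument, with notation mine): one considers the class
$$\mathcal{D}=\{F\in\sigma(\mathcal{I}) : \mu_1(F)=\mu_2(F)\}$$
and checks that it is a d-system containing the $\pi$-system $\mathcal{I}$. The very first d-system axiom, $\Omega\in\mathcal{D}$, is exactly the statement $\mu_1(\Omega)=\mu_2(\Omega)$; Dynkin's lemma then gives $\mathcal{D}\supseteq\sigma(\mathcal{I})$.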

Another answer:

What if $\mathcal I_3$ consists of just the empty set? Then the hypothesis (without the requirement $\Omega\in\mathcal I_k$) holds for any $\mathcal I_1$ and $\mathcal I_2$, yet $\mathcal I_1$ and $\mathcal I_2$ need not be independent.
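
A concrete instance (my own illustration, not part of the original answer): toss a fair coin, so $\Omega=\{H,T\}$ with $P(\{H\})=P(\{T\})=\tfrac12$, and take $\mathcal I_1=\mathcal I_2=\{\{H\}\}$ and $\mathcal I_3=\{\emptyset\}$. Every instance of the hypothesis reads
$$P(I_1\cap I_2\cap\emptyset)=0=P(I_1)\cdot P(I_2)\cdot P(\emptyset),$$
which holds trivially, yet $P(\{H\}\cap\{H\})=\tfrac12\neq\tfrac14=P(\{H\})\cdot P(\{H\})$, so $\sigma(\mathcal I_1)$ and $\sigma(\mathcal I_2)$ are not independent.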