Independence in functional theoretic view on probability


I'm reading the book Hilbert Space Methods in Probability and Statistical Inference by Small and McLeish.

There (Def. 3.2.4) independence is defined as follows:

Let $\mathbf{B}$ and $\mathbf{C}$ be two sets of random variables. The sets $\mathbf{B}$ and $\mathbf{C}$ are said to be independent if $$\langle \mathbf{x} - \mathbb{E}(\mathbf{x})\mathbf{1}, \mathbf{y} - \mathbb{E}(\mathbf{y})\mathbf{1}\rangle = 0$$ for all $\mathbf{x} \in ps(\mathbf{B})$ and all $\mathbf{y} \in ps(\mathbf{C})$. In particular, if $\mathbf{B}=\{\mathbf{x}\}$ and $\mathbf{C}=\{\mathbf{y}\}$, then we say that $\mathbf{x}$ and $\mathbf{y}$ are independent. Random variables $\{\mathbf{x}_{\alpha}\}$ are said to be mutually independent if any two disjoint subcollections of the random variables are independent. Similarly, events $\{A_{\alpha}\}$ are said to be mutually independent if their corresponding indicators are mutually independent.

The authors claim that this definition immediately implies $$P\left( \bigwedge_{i=1}^{n}A_{i} \right)=\prod\limits_{i=1}^{n}P(A_{i})$$ for mutually independent events $A_{1},A_{2},\ldots, A_{n}$.

While I think the proof for arbitrary $n$ follows easily by induction, I'm stuck on proving the result for $n=2$.

From the definition, two events $A$ and $B$ are independent if: $$\begin{align} \langle \mathbf{1}_{A} - \mathbb{E}(\mathbf{1}_{A})\mathbf{1}, \mathbf{1}_{B} - \mathbb{E}(\mathbf{1}_{B})\mathbf{1}\rangle &= \langle \mathbf{1}_{A} , \mathbf{1}_{B} \rangle - \mathbb{E}(\mathbf{1}_{B})\langle\mathbf{1}_{A} ,\mathbf{1} \rangle - \mathbb{E}(\mathbf{1}_{A})\langle\mathbf{1}_{B} ,\mathbf{1} \rangle + \mathbb{E}(\mathbf{1}_{A})\mathbb{E}(\mathbf{1}_{B})\langle \mathbf{1} ,\mathbf{1}\rangle\\ &= \langle \mathbf{1}_{A} , \mathbf{1}_{B} \rangle - 2\mathbb{E}(\mathbf{1}_{A})\mathbb{E}(\mathbf{1}_{B}) + \mathbb{E}(\mathbf{1}_{A})\mathbb{E}(\mathbf{1}_{B})\\ &= \langle \mathbf{1}_{A} , \mathbf{1}_{B} \rangle - \mathbb{E}(\mathbf{1}_{A})\mathbb{E}(\mathbf{1}_{B})\\ &= 0, \end{align}$$ where I used $\langle\mathbf{1}_{A},\mathbf{1}\rangle = \mathbb{E}(\mathbf{1}_{A})$, $\langle\mathbf{1}_{B},\mathbf{1}\rangle = \mathbb{E}(\mathbf{1}_{B})$ and $\langle\mathbf{1},\mathbf{1}\rangle = 1$. Independence of $A$ and $B$ is therefore equivalent to $$\langle \mathbf{1}_{A} , \mathbf{1}_{B} \rangle = \mathbb{E}(\mathbf{1}_{A})\mathbb{E}(\mathbf{1}_{B}).$$
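To convince myself the algebra above is right, here is a quick numerical check on a toy finite probability space (my own construction, not from the book; it assumes the inner product is $\langle \mathbf{x},\mathbf{y}\rangle = \mathbb{E}(\mathbf{x}\mathbf{y})$, which is exactly what I am unsure about in general):

```python
import numpy as np

# Toy sample space: two independent fair coin flips, uniform measure.
# Assumption (mine): the inner product is <x, y> = E[x*y].
omega = [(i, j) for i in (0, 1) for j in (0, 1)]
p = np.full(4, 0.25)  # uniform probabilities on the four outcomes

ind_A = np.array([1.0 if i == 1 else 0.0 for i, j in omega])  # A = first flip heads
ind_B = np.array([1.0 if j == 1 else 0.0 for i, j in omega])  # B = second flip heads
one = np.ones(4)

def inner(x, y):
    """Inner product <x, y> = E[x*y] on this finite space."""
    return float(np.sum(x * y * p))

E_A = inner(ind_A, one)  # E[1_A] = P(A) = 0.5
E_B = inner(ind_B, one)  # E[1_B] = P(B) = 0.5

cov = inner(ind_A - E_A * one, ind_B - E_B * one)  # the quantity in Def. 3.2.4
print(cov)                                  # → 0.0, so A and B are independent
print(inner(ind_A, ind_B), E_A * E_B)       # → 0.25 0.25, matching the identity
```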

Furthermore, also by definition, $P(A):=\mathbb{E}(\mathbf{1}_{A})$. The last equality is therefore equivalent to $$\langle \mathbf{1}_{A} , \mathbf{1}_{B} \rangle = P(A)P(B).$$

However, to complete the proof, $P(A \wedge B)=\langle \mathbf{1}_{A} , \mathbf{1}_{B} \rangle$ must hold. But by definition $$P\left(A \wedge B\right)=\mathbb{E}(\mathbf{1}_{A\wedge B})=\langle \mathbf{1}_{A}\mathbf{1}_{B},\mathbf{1} \rangle.$$

If we consider the Hilbert space $L^{2}([0,1])$, this follows from the definition $\langle f,g\rangle :=\int_{[0,1]}f(x)g(x)\,dx$, since $$ \langle \mathbf{1}_{A}\mathbf{1}_{B},\mathbf{1}\rangle = \int_{[0,1]}\mathbf{1}_{A}(x)\mathbf{1}_{B}(x)\mathbf{1}_{[0,1]}(x)\,dx = \int_{[0,1]}\mathbf{1}_{A}(x)\mathbf{1}_{B}(x)\,dx = \langle \mathbf{1}_{A},\mathbf{1}_{B}\rangle. $$
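A numerical sanity check of this $L^{2}([0,1])$ computation (a sketch with arbitrary choices of mine: the intervals $A = [0, 1/2]$, $B = [1/4, 3/4]$, and the grid resolution):

```python
import numpy as np

# Check on L^2([0,1]) that <1_A * 1_B, 1> = <1_A, 1_B>.
# A = [0, 1/2] and B = [1/4, 3/4] are arbitrary illustrative intervals.
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]

ind_A = ((0.0 <= x) & (x <= 0.5)).astype(float)
ind_B = ((0.25 <= x) & (x <= 0.75)).astype(float)
one = np.ones_like(x)

def inner(f, g):
    """<f, g> = integral of f*g over [0,1], approximated by a Riemann sum."""
    return float(np.sum(f * g) * dx)

print(inner(ind_A * ind_B, one))  # ≈ 0.25 = length of A ∩ B = [1/4, 1/2]
print(inner(ind_A, ind_B))        # same value
```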

But I do not see why $$\langle\mathbf{1}_{A}\mathbf{1}_{B},\mathbf{1} \rangle=\langle \mathbf{1}_{A} , \mathbf{1}_{B}\rangle$$ should hold in an arbitrary Hilbert space, where no pointwise product of vectors is even defined. I assume I'm missing some property of the inner product, or of how the space of random variables is set up in this framework.