Interpretation of the Weak Law of Large Numbers as a corollary of a convergence theorem in Hilbert spaces


I'm trying to solve an exercise in my probability book that states the following:

Show that for a bounded orthogonal sequence $x_1, x_2, \ldots$ in a Hilbert space $H$ the sequence $\frac{x_1 + \cdots + x_n}{n}$ converges to zero. Further, explain in which way the weak law of large numbers (for uncorrelated random variables with finite variance) can be obtained as a corollary of this statement.

For the first part I showed that the given sequence is Cauchy by using Pythagoras; in short, we have

$\left\| \frac{x_1 + \cdots + x_n}{n} - \frac{x_1 + \cdots + x_m}{m} \right\| \le \sum_{k=m}^{n} \left\| x_k \right\|$

and by assumption we have $\sum_{k=m}^{n} \left\| x_k \right\|^2 \le K$, so $\sum_{k=1}^{\infty} \left\| x_k \right\|^2$ converges, and eventually so does $\sum_{k=1}^{\infty} x_k$.

I'm stuck at the second part now: how do we get the convergence in probability, $P\left( \left| \frac{X_1 + \cdots + X_n}{n} - E(X_1) \right| \ge \varepsilon \right) \to 0$? I thought about using the fact that convergence in quadratic mean implies convergence in probability. However, I don't see how the special setting of the Hilbert space and the assumptions on the vectors ensure convergence for much more general objects like the expected value.

Best Answer

For the first part note that by orthogonality (Pythagoras) $\|\frac {x_1+x_2+\cdots+x_n} n\|^{2}=\frac 1 {n^{2}}\|x_1\|^{2}+\cdots+\frac 1 {n^{2}} \|x_n\|^{2} \leq C \frac 1 n \to 0$, where $C$ is a uniform bound for $\|x_n\|^{2}$.
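A quick numerical sanity check of this bound (a sketch for illustration only: the orthogonal vectors are taken to be scaled standard basis vectors in $\mathbb{R}^N$ with squared norms bounded by $C = 4$, an assumed setup not from the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # ambient dimension; the standard basis vectors e_1, ..., e_N are orthogonal

# Bounded orthogonal vectors: x_k = c_k * e_k with ||x_k||^2 = c_k^2 <= C = 4.
coeffs = rng.uniform(-2.0, 2.0, size=N)

norms = []
partial = np.zeros(N)  # running sum x_1 + ... + x_n
for n in range(1, N + 1):
    partial[n - 1] = coeffs[n - 1]  # adding x_n only touches coordinate n
    norms.append(np.linalg.norm(partial / n))

# Pythagoras: ||(x_1 + ... + x_n)/n||^2 = (1/n^2) * sum_k ||x_k||^2 <= C/n.
for n in (10, 100, 1000):
    assert norms[n - 1] ** 2 <= 4.0 / n

print([round(v, 4) for v in (norms[9], norms[99], norms[999])])
```

The printed norms shrink on the order of $1/\sqrt{n}$, matching the $C/n$ bound on the squared norm.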

For the second part let $\{X_n\}$ be i.i.d. with finite variance. Then $\{X_n-EX_n\}$ is an orthogonal sequence in the Hilbert space $L^{2}$, because $E[(X_n-EX_n)(X_m-EX_m)]=E(X_n-EX_n)\,E(X_m-EX_m)=0$ for $n \neq m$ by independence. Also $E[(X_n-EX_n)^{2}]=\operatorname{var}(X_1)$, so the norms $\|X_n-EX_n\|_{L^2}$ are bounded. It follows from the first part that $\frac {S_n - nEX_1} n \to 0$ in $L^{2}$ (where $S_n=X_1+X_2+\cdots+X_n$), i.e. $\frac{S_n}{n} \to EX_1$ in $L^2$. But convergence in $L^{2}$ implies convergence in probability, so we are done. (The same argument works if the $X_n$ are merely uncorrelated with a common mean and uniformly bounded variances, since uncorrelatedness already gives the orthogonality.)
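The corollary can also be illustrated numerically (a sketch with assumed parameters: Bernoulli(0.5) variables, which are independent and hence uncorrelated, with variance $1/4$). The empirical deviation probability $P(|S_n/n - \mu| \ge \varepsilon)$ shrinks as $n$ grows, consistent with the Chebyshev bound $\operatorname{var}(X_1)/(n\varepsilon^2)$ that the $L^2$ convergence yields:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, eps, trials = 0.5, 0.1, 5000  # illustration parameters (assumed, not from the exercise)

dev_prob = {}
for n in (10, 100, 1000):
    # `trials` independent runs of the average of n Bernoulli(mu) variables
    means = rng.binomial(1, mu, size=(trials, n)).mean(axis=1)
    dev_prob[n] = float(np.mean(np.abs(means - mu) >= eps))
    chebyshev = 0.25 / (n * eps ** 2)  # var(X_1)/(n * eps^2) with var(X_1) = 1/4
    print(n, dev_prob[n], chebyshev)
```

For large $n$ the empirical deviation probability is essentially zero, which is exactly the convergence in probability the weak law asserts.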