A.s. convergence of sum


Let $A_1,A_2,\ldots,B_1,B_2,\ldots,C_1,C_2,\ldots$ be independent random variables with $$ EA_n=0, \qquad EA_n^2=1, \qquad EB_n=0, \qquad EB_n^2=n^2 $$ and let $C_n$ be Bernoulli distributed with $P(C_n=1)=\frac{1}{n^2}$. Assume that $(A_n)_{n\geq1}$ is an IID sequence. I have shown that $$ \frac{1}{n}\sum_{i=1}^{n}(A_i+B_iC_i) \xrightarrow{a.s.}E(A_1+B_1C_1)=0 $$ and that $\sum_{i=1}^{n}B_iC_i$ converges almost surely to some limit. However, I have now been tasked with showing almost sure convergence of $\frac{1}{n}\sum_{i=1}^{n}(A_i+B_iC_i)^2$. Furthermore, I should determine whether it converges to $E((A_1+B_1C_1)^2)$.
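(As a numerical sanity check, not part of any proof: the first convergence can be simulated. The concrete distributions below are my own choice, pinned down only by the stated moments: $A_i\sim N(0,1)$, $B_i=i\cdot R_i$ with $R_i$ a random sign, and $C_i\sim\text{Bernoulli}(1/i^2)$ as given.)

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
i = np.arange(1, n + 1)

A = rng.standard_normal(n)                      # E A_i = 0, E A_i^2 = 1
B = i * rng.choice([-1.0, 1.0], size=n)         # E B_i = 0, E B_i^2 = i^2
C = (rng.random(n) < 1.0 / i**2).astype(float)  # P(C_i = 1) = 1/i^2

# Running average of A_i + B_i C_i; should be near 0 for large n
print(np.mean(A + B * C))
```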

To tackle this problem I first split the sum like so: $$ \frac{1}{n}\sum_{i=1}^{n}(A_i+B_iC_i)^2=\frac{1}{n}\sum_{i=1}^{n}A_i^2+\frac{1}{n}\sum_{j=1}^{n}(B_jC_j)^2+\frac{2}{n}\sum_{k=1}^{n}A_k B_kC_k. $$ The first term converges to $1$ by the strong law of large numbers. I believe I have shown that the sum in the second term converges almost surely, since $$ \sum_{j=1}^{\infty}P((B_jC_j)^2>0)\leq \sum_{j=1}^{\infty}P(C_j=1)=\sum_{j=1}^{\infty}\frac{1}{j^2}<\infty, $$ which by the Borel–Cantelli lemma implies that only finitely many terms of $\sum_{j=1}^{n}(B_jC_j)^2$ are non-zero, so the sum has a finite limit. Does this mean that $\frac{1}{n}\sum_{j=1}^{n}(B_jC_j)^2$ converges almost surely to $0$? I can apply almost the same argument to $\frac{2}{n}\sum_{k=1}^{n}A_k B_kC_k$ and end up with the same result. However, this would mean that $$ \frac{1}{n}\sum_{i=1}^{n}(A_i+B_iC_i)^2 \xrightarrow{a.s.} 1 \neq 2=E((A_1+B_1C_1)^2). $$ Am I correct in my reasoning?
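(An aside: the decomposition can also be checked numerically. Same caveat as before: the concrete distributions $A_i\sim N(0,1)$ and $B_i=i\cdot(\pm1)$ are my assumption, chosen only to match the given moments. The running average of the squares typically settles near $1$ rather than near $2$.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
i = np.arange(1, n + 1)

A = rng.standard_normal(n)                      # E A_i = 0, E A_i^2 = 1
B = i * rng.choice([-1.0, 1.0], size=n)         # E B_i = 0, E B_i^2 = i^2
C = (rng.random(n) < 1.0 / i**2).astype(float)  # P(C_i = 1) = 1/i^2

# Running average of (A_i + B_i C_i)^2
avg_sq = np.cumsum((A + B * C) ** 2) / i
print(avg_sq[-1])  # typically near 1, not E[(A_1 + B_1 C_1)^2] = 2
```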

Best answer:

This is all correct. The main step, as you noted, is showing that with probability $1$ only finitely many $C_i$ are non-zero. Thus for any sequence $\{X_i\}$, the partial sums $\sum_{i=1}^nX_iC_i$ are almost surely eventually constant, and at any such sample point we clearly have $\frac1n\sum_{i=1}^nX_iC_i\to0$.
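(This "only finitely many $C_i$ are non-zero" step is easy to see in simulation, since $C_j\sim\text{Bernoulli}(1/j^2)$ is fully specified by the problem: out of a million indices, only a handful of early ones are ever non-zero.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
j = np.arange(1, n + 1)
C = rng.random(n) < 1.0 / j**2   # P(C_j = 1) = 1/j^2

nonzero = np.flatnonzero(C) + 1  # the indices j with C_j = 1
print(len(nonzero), nonzero.max())
```

Since $\sum_j 1/j^2 = \pi^2/6 \approx 1.64$, a typical run has $C_1=1$ (which happens with probability $1$) plus at most one or two other non-zero indices, all small; beyond the last of them, $\sum_{i\le n} X_iC_i$ is constant in $n$.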

A couple of things are worth pointing out. That a sequence of independent random variables $\{Z_i\}$ satisfies $\frac1n\sum_{i=1}^nZ_i\to c$ with $c\neq E(Z_1)$ is not particularly surprising - take $\{Z_i\}_{i\ge2}$ iid and $E(Z_1)=E(Z_2)+1$, for instance - but what makes the case $Z_i=(A_i+B_iC_i)^2$ interesting is that $E(Z_i)=E(Z_1)$ for all $i$, so the sequence of averages has constant mean and yet converges to a different constant. This is an example of the more general property that $X_n\to X$ a.s. does not imply $E(X_n)\to E(X)$.
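(A standard self-contained instance of that last property, separate from this problem: let $U$ be uniform on $(0,1)$ and set $X_n = n\,\mathbf{1}_{\{U\leq 1/n\}}$. For any fixed outcome $U=u>0$ we have $X_n=0$ once $1/n<u$, so $$X_n \xrightarrow{a.s.} 0, \qquad\text{yet}\qquad E(X_n)=n\cdot P(U\leq 1/n)=1 \text{ for all } n,$$ so the limit of the expectations is $1\neq0=E(\lim X_n)$.)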

If you want to know why it fails in this case, notice that the statement "with probability $1$, only finitely many $C_i$ are non-zero" inherently involves looking at all the $C_i$ at once, whereas taking the mean one term at a time will never see this information. So, with probability $1$, there is some constant $M$ such that $$\frac1n\sum_{i=1}^n(A_i+B_iC_i)^2=\frac1n\sum_{i=1}^nA^2_i+\frac{M}n$$ for $n$ large enough, and so the limit depends only on $\{A_i\}$. In fact, a similar statement is true if you remove the square, so it is actually the first case (i.e. $\frac1n\sum_{i=1}^n(A_i+B_iC_i)\to E(A_1+B_1C_1)$) that is exceptional in this situation - if $E(B_i)\neq0$, it would no longer hold.