Show that $\frac{1}{n} \sum \limits_{k=1}^{n} X_{k}-\frac{1}{n} \sum \limits_{k=1}^{n} \mathbb{E} X_{k} \stackrel{\text{a.s.}}{\longrightarrow} 0$


Let $ \left(X_{n}\right)_{n \in \mathbb{N}} \subset \mathcal{L}^{2} $ be a sequence of independent random variables. Furthermore, let $ \alpha<1 $ and $ c \geq 0 $ be such that $ \sum \limits_{i=1}^{k} \operatorname{Var}\left(X_{i}\right) \leq c k^{\alpha} \text { for all } k \in \mathbb{N} $. Show that $ \frac{1}{n} \sum \limits_{k=1}^{n} X_{k}-\frac{1}{n} \sum \limits_{k=1}^{n} \mathbb{E} X_{k} \stackrel{\text{a.s.}}{\longrightarrow} 0 \text { as } n \rightarrow \infty \text {. } $
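Before turning to the proof, a quick numerical sanity check can be reassuring. The sketch below simulates one concrete sequence satisfying the hypothesis; the Gaussian distributions, the choice of variances and means, and the values of $\alpha$ and $c$ are all illustrative assumptions, not part of the problem:

```python
import numpy as np

# Illustrative example: X_k ~ N(mu_k, sigma_k^2) independent, with
# sigma_k^2 = c * (k^alpha - (k-1)^alpha), so that the cumulative
# variance sum_{i<=k} Var(X_i) = c * k^alpha meets the bound exactly.
rng = np.random.default_rng(0)
alpha, c, n = 0.5, 1.0, 100_000

k = np.arange(1, n + 1)
var_k = c * (k**alpha - (k - 1)**alpha)  # individual variances
mu_k = np.sin(k)                         # arbitrary bounded means
X = rng.normal(loc=mu_k, scale=np.sqrt(var_k))

# The centered sample average should be close to 0 for large n.
deviation = np.mean(X) - np.mean(mu_k)
print(abs(deviation))
```

Increasing `n` shrinks the deviation further, consistent with the claimed almost-sure convergence.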

Attempt/Idea:

To prove this, we can use the Strong Law of Large Numbers (SLLN). The SLLN states that if $ \left(Y_n\right)_{n \in \mathbb{N}} $ is a sequence of independent and identically distributed random variables with finite mean $ \mu $, then the sample average $ \frac{1}{n} \sum \limits_{k=1}^{n} Y_k $ converges almost surely to $ \mu $ as $ n $ approaches infinity.

In our case, we have a sequence $ \left(X_{n}\right)_{n \in \mathbb{N}} $ of independent random variables, but they may not be identically distributed. However, we can make use of the fact that their variances satisfy $ \sum \limits_{i=1}^{k} \operatorname{Var}\left(X_{i}\right) \leq c k^{\alpha} $.

Since variances are nonnegative, each individual variance satisfies $ \operatorname{Var}(X_k) \leq \sum \limits_{i=1}^{k} \operatorname{Var}\left(X_{i}\right) \leq c k^{\alpha} $. Now, let's define $ Y_k = \frac{X_k - \mathbb{E} X_k}{k} $. We can see that $ \mathbb{E} Y_k = \frac{\mathbb{E} X_k - \mathbb{E} X_k}{k} = 0 $.

Next, we calculate the variance of $ Y_k $: $$ \operatorname{Var}(Y_k) = \operatorname{Var}\left(\frac{X_k - \mathbb{E} X_k}{k}\right) = \frac{1}{k^2} \operatorname{Var}(X_k) \leq \frac{c}{k^{2-\alpha}} $$

Since $ \alpha < 1 $, the exponent $ 2 - \alpha > 1 $. Therefore, $ \sum \limits_{k=1}^{\infty} \operatorname{Var}(Y_k) \leq \sum \limits_{k=1}^{\infty} \frac{c}{k^{2-\alpha}} < \infty $, and by Kolmogorov's convergence criterion for independent, centered random variables (the i.i.d. SLLN does not apply here), the series $ \sum \limits_{k=1}^{\infty} Y_k $ converges almost surely.

Finally, let's rewrite the expression we want to prove: $$ \frac{1}{n} \sum \limits_{k=1}^{n} X_{k}-\frac{1}{n} \sum \limits_{k=1}^{n} \mathbb{E} X_{k} = \frac{1}{n} \sum \limits_{k=1}^{n} k \cdot Y_k $$

Since $ \sum \limits_{k=1}^{\infty} Y_k = \sum \limits_{k=1}^{\infty} \frac{X_k - \mathbb{E} X_k}{k} $ converges almost surely and $ k \uparrow \infty $, Kronecker's lemma gives $ \frac{1}{n} \sum \limits_{k=1}^{n} k \cdot Y_k \stackrel{\text{a.s.}}{\longrightarrow} 0 $. We conclude that $ \frac{1}{n} \sum \limits_{k=1}^{n} X_{k}-\frac{1}{n} \sum \limits_{k=1}^{n} \mathbb{E} X_{k} \stackrel{\text{a.s.}}{\longrightarrow} 0 $ as $ n \rightarrow \infty $.
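For completeness, the two classical facts behind this step are, in their standard form:

```latex
% Kolmogorov's convergence criterion (independent, centered, square-integrable Y_k):
\[
\sum_{k=1}^{\infty} \operatorname{Var}(Y_k) < \infty
\quad \Longrightarrow \quad
\sum_{k=1}^{\infty} Y_k \ \text{converges almost surely.}
\]
% Kronecker's lemma (a deterministic statement about real sequences):
\[
0 < b_n \uparrow \infty
\ \text{ and } \
\sum_{k=1}^{\infty} \frac{x_k}{b_k} \ \text{converges}
\quad \Longrightarrow \quad
\frac{1}{b_n} \sum_{k=1}^{n} x_k \longrightarrow 0.
\]
```

Here they are applied with $Y_k$ as defined above, $x_k = X_k - \mathbb{E} X_k$, and $b_k = k$.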

Accepted answer:

This is a relatively straightforward application of Chebyshev's inequality and the Borel-Cantelli lemma. By independence, $\operatorname{Var}\left(\frac{1}{k}\sum_{i=1}^{k} X_i\right) = \frac{1}{k^2}\sum_{i=1}^{k}\operatorname{Var}(X_i)$, so for any $\lambda > 0$, $$\mathbb{P}\left(\left|\frac{1}{k}\sum_{i=1}^{k} X_i - \frac{1}{k}\sum_{i=1}^{k}\mathbb{E}(X_i)\right| \geq \lambda\right) \leq \frac{1}{\lambda^2}\operatorname{Var}\left(\frac{1}{k}\sum_{i=1}^{k} X_i\right) = \frac{1}{\lambda^2 k^2}\sum_{i=1}^{k}\operatorname{Var}(X_i) \leq \frac{c}{\lambda^2}\, k^{\alpha-2}.$$ Since $\alpha < 1$, we have $\alpha - 2 < -1$, so this bound is summable in $k$. By Borel-Cantelli, for each fixed $\lambda > 0$ the deviation exceeds $\lambda$ for only finitely many $k$ almost surely; taking $\lambda = 1/m$ for $m \in \mathbb{N}$ and intersecting the countably many full-measure events yields the almost-sure convergence.
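The summability claim can be illustrated numerically; the values of $\alpha$, $c$, and $\lambda$ below are sample choices, not dictated by the problem:

```python
# Partial sums of the Borel-Cantelli bound (c / lambda^2) * k^(alpha - 2)
# for a sample alpha < 1; the partial sums stabilize, consistent with
# sum_k k^(alpha - 2) < infinity whenever alpha - 2 < -1.
alpha, c, lam = 0.5, 1.0, 0.1

def partial_sum(N):
    return sum((c / lam**2) * k**(alpha - 2) for k in range(1, N + 1))

sums = [partial_sum(N) for N in (10**3, 10**4, 10**5)]
print(sums)
```

The successive increments shrink rapidly, as expected for a convergent series; for $\alpha \geq 1$ the same partial sums would grow without bound.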