Showing a random variable converges to $\mu$ in probability


So, I wasn't too sure how to approach this problem: let $X_1, X_2, \ldots$ be i.i.d. random variables with mean $\mu$ and variance $\sigma^2$, let $Y_{n} = \frac{2}{n(n+1)}\sum_{i=1}^{n} iX_{i}$, and show that $Y_n \to \mu$ in probability. First, I computed $\mathbb{E}[Y_{n}]$ as follows:

$$\mathbb{E}\left[\frac{2}{n(n + 1)}\sum_{i = 1}^{n} iX_{i}\right] = \frac{2}{n(n + 1)}\sum_{i = 1}^{n} \mathbb{E}[iX_{i}] $$

$$= \frac{2}{n(n + 1)} \left(1(\mu) + 2(\mu) + 3(\mu) + \cdots + n(\mu) \right) $$

$$= \frac{2}{n(n + 1)} \cdot \mu\left(\frac{n(n + 1)}{2}\right) $$

$$= \mu.$$
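As a quick sanity check (not part of the problem itself), a Monte Carlo simulation agrees with $\mathbb{E}[Y_n] = \mu$. The normal distribution and the values of $\mu$ and $\sigma$ below are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 200_000
mu, sigma = 3.0, 2.0  # hypothetical mean and standard deviation of the X_i

# Draw X_1, ..., X_n for each trial and form Y_n = 2/(n(n+1)) * sum_i i*X_i.
X = rng.normal(mu, sigma, size=(trials, n))
weights = np.arange(1, n + 1)
Y = (2.0 / (n * (n + 1))) * (X * weights).sum(axis=1)

print(Y.mean())  # should land very close to mu = 3.0
```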

Then, I computed the variance. First compute the second moment:

$$\mathbb{E}\left[Y_{n}^{2}\right] = \frac{4}{n^{2}(n + 1)^{2}} \sum_{i = 1}^{n} \mathbb{E}[i^{2} X_{i}^{2}] = \frac{4}{n^{2}(n + 1)^{2}} \cdot \left(1^2 \mu^2 + 2^2 \mu^2 + \cdots + n^2 \mu^2 \right) $$

$$= \mu^{2},$$

which means that $\text{Var}(Y_{n}) = 0.$

I don't know if I actually computed the variance right. I also don't know what to do next. Any help would be appreciated. In particular, I think that Chebyshev's Inequality states that

$$P(|Y_{n} - \mu| \geq \epsilon) \leq \frac{\sigma^{2}}{\epsilon^{2}} = 0,$$

but since probability cannot be negative, we must have it equal to $0$? I don't really know.

Best Answer

The variance computation is incorrect. (Your computation would imply that $Y_n$ is constant with probability one.) Since the $X_i$ are independent, we have $$ \text{Var}(Y_n)=\frac{4}{n^2(n+1)^2}\sum_{i=1}^n i^2\,\text{Var}(X_i)=\frac{4\sigma^2}{n^2(n+1)^2}\sum_{i=1}^n i^2=\frac{4\sigma^2}{n^2(n+1)^2}\cdot\frac{n(n+1)(2n+1)}{6}, $$ where in the second equality we used the fact that the $X_i$ are identically distributed (so they have equal variance $\sigma^2$). So $$ \text{Var}(Y_n)=\frac{2(2n+1)\sigma^2}{3n(n+1)}. $$ Now apply Chebyshev's inequality to deduce that, given $\varepsilon>0$, $$ P(|Y_n-\mu|>\varepsilon)\leq \frac{\text{Var}(Y_n)}{\varepsilon^2}\to 0 $$ as $n\to \infty$, since $\text{Var}(Y_n)\sim \frac{4\sigma^2}{3n}$ as $n\to \infty$, as desired.
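A simulation can corroborate the variance formula $\text{Var}(Y_n)=\frac{2(2n+1)\sigma^2}{3n(n+1)}$ and its $O(1/n)$ decay. The normal distribution and the parameter values below are illustrative assumptions, not part of the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 3.0, 2.0  # hypothetical mean and standard deviation of the X_i
trials = 50_000

results = {}
for n in (10, 50, 200):
    # Draw i.i.d. X_1, ..., X_n per trial and form Y_n.
    X = rng.normal(mu, sigma, size=(trials, n))
    w = np.arange(1, n + 1)
    Y = (2.0 / (n * (n + 1))) * (X * w).sum(axis=1)

    theory = 2 * (2 * n + 1) * sigma**2 / (3 * n * (n + 1))
    results[n] = (Y.var(), theory)
    print(f"n={n}: empirical Var={Y.var():.4f}, formula={theory:.4f}")
```

The empirical variances should track the formula closely and shrink roughly like $1/n$ as $n$ grows, which is exactly what drives the Chebyshev bound to $0$.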

As an appendix, we show that in general, if the $W_i$ are independent (or merely uncorrelated), then $\text{Var}\left(\sum_{i=1}^n W_i\right)=\sum_{i=1}^n \text{Var}(W_i)$. To see this, suppose WLOG that $W_i$ has zero mean for all $i$. Then $$\text{Var}\left(\sum_{i=1}^n W_i\right)=E\left[\left(\sum_{i=1}^n W_i\right)^2\right]=E\sum_{i}\sum_j W_iW_j=\sum_{i}\sum_j E[W_iW_j]=\sum_{i=1}^n\text{Var}(W_i),$$ since $E[W_iW_j]=\delta_{ij}\text{Var}(W_i)$ by independence (or uncorrelatedness).

If the $W_i$ do not have zero mean, consider $Z_i=W_i-\mu_i$ (where $\mu_i=EW_i$), apply the previous result to the $Z_i$, and use the fact that $\text{Var}(X+c)=\text{Var}(X)$ for any constant $c\in\mathbb{R}$.
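The appendix identity can also be checked numerically. In the sketch below, the $W_i$ are independent normals with arbitrary, hypothetical standard deviations:

```python
import numpy as np

rng = np.random.default_rng(2)
trials, k = 500_000, 5

# k independent columns, each with its own (hypothetical) standard deviation.
scales = np.array([1.0, 0.5, 2.0, 1.5, 3.0])
W = rng.normal(0.0, scales, size=(trials, k))

lhs = W.sum(axis=1).var()   # empirical Var(W_1 + ... + W_k)
rhs = (scales**2).sum()     # sum of the individual variances
print(lhs, rhs)             # the two should agree closely
```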