How to show that $P(X_{n1}+X_{n2}+\dots+X_{nn} \neq 0) \leq 1/n$?


The question is :

Suppose that for each positive integer $n$, $X_{n1},X_{n2},\dots,X_{nn}$ are independent, identically distributed random variables taking only the values $-\sqrt{n}$, $0$, $\sqrt{n}$, such that \begin{equation*} P(X_{n1}= -\sqrt{n})= P(X_{n1}= \sqrt{n})=\frac{1}{2n^2}, P(X_{n1}=0)= 1- \frac{1}{n^2}. \end{equation*}
i) Show that $P(X_{n1}+X_{n2}+\dots+X_{nn} \neq 0) \leq 1/n$,

and use this to show that $X_{n1}+X_{n2}+\dots+X_{nn} \Rightarrow \delta_0$ as $n \rightarrow \infty$, where $\delta_0$ is the degenerate probability measure defined by $\delta_0(\{0\})=1$ and $\delta_0(\mathbb{R}\setminus\{0\})=0$.

I tried to solve this by using the Lindeberg central limit theorem and I have shown that $E(X_{nk})=0$, $\operatorname{Var}(X_{nk})=1/n$ and $\operatorname{Var}\left(X_{n1}+X_{n2}+\dots+X_{nn}\right)=1$.

Can anyone give me some hints to solve this question?


There are 3 best solutions below


Note that $X_{n1}+\dots+X_{nn}\neq 0$ happens if and only if $\lvert X_{n1}+\dots+X_{nn}\rvert \geq \sqrt{n}$: each $X_{ni}$ takes values in $\{-\sqrt{n},0,\sqrt{n}\}$, so the sum is an integer multiple of $\sqrt{n}$, and a nonzero multiple has absolute value at least $\sqrt{n}$. This way, we can use Chebyshev's inequality to bound $$ \mathbb{P}(\lvert X_{n1}+\dots+X_{nn}\rvert \geq \sqrt{n}) \leq \frac{\operatorname{Var}(X_{n1}+\dots+X_{nn})}{(\sqrt{n})^2} = \frac{n\operatorname{Var}(X_{n1})}{n} = \frac{1}{n}. $$
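As a sanity check (not part of the proof), one can compute $P(X_{n1}+\dots+X_{nn}\neq 0)$ exactly: the sum is $0$ precisely when the number of $+\sqrt{n}$ terms equals the number of $-\sqrt{n}$ terms. The helper name `prob_sum_nonzero` below is my own, and the snippet assumes only the distribution given in the question.

```python
from fractions import Fraction
from math import factorial

def prob_sum_nonzero(n):
    # P(X = +sqrt(n)) = P(X = -sqrt(n)) = 1/(2n^2), P(X = 0) = 1 - 1/n^2.
    p = Fraction(1, 2 * n * n)
    q = 1 - Fraction(1, n * n)
    # The sum is 0 iff the count of +sqrt(n) terms equals the count of
    # -sqrt(n) terms; sum over k = number of (+,-) pairs.
    p_zero = Fraction(0)
    for k in range(n // 2 + 1):
        # multinomial coefficient: k plus-signs, k minus-signs, n-2k zeros
        ways = factorial(n) // (factorial(k) ** 2 * factorial(n - 2 * k))
        p_zero += ways * p ** (2 * k) * q ** (n - 2 * k)
    return 1 - p_zero

# The exact probability never exceeds the Chebyshev bound 1/n.
for n in range(1, 21):
    assert prob_sum_nonzero(n) <= Fraction(1, n)
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt about whether the bound holds with equality (it does for $n=1$, where $P(X_{11}=0)=0$).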


The sum $\sum_i X_{ni}$ is nonzero if and only if $(\sum_i X_{ni})^2\geq n$, so by Markov's inequality,

$$P\left(\left(\sum_i X_{ni}\right)^2\geq n \right)\leq \frac{1}{n}E\left[\left(\sum_i X_{ni}\right)^2\right]=1/n.$$
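For completeness, here is the second-moment computation behind the final equality; the cross terms vanish because the $X_{ni}$ are independent with mean $0$:

$$E\left[\left(\sum_i X_{ni}\right)^2\right] = \sum_{i=1}^n E\left[X_{ni}^2\right] = n\left(n\cdot\frac{1}{2n^2} + n\cdot\frac{1}{2n^2}\right) = n\cdot\frac{1}{n} = 1.$$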


This result doesn't even require the $X_{ni}$ to be independent, nor do they have to be identically distributed. It suffices to have $\sum_{i=1}^n P(X_{ni} \neq 0) \to 0$. In what follows, I won't use the assumptions of independence or identical distribution, but for ease of reading, I will use the given probability $P(X_{ni} \neq 0) = \frac1{n^2}$. You should be able to see how to change it to hold as long as $\sum_{i=1}^nP(X_{ni}\neq 0) \to 0$ as $n\to \infty$.


First, note that if $\sum_{i=1}^n X_{ni} \neq 0$, then at least one of the terms is nonzero. Therefore, by subadditivity of the probability $P$, $$P\left(\sum_{i=1}^nX_{ni} \neq 0\right) \leq \sum_{i=1}^nP\left(X_{ni}\neq 0\right) = \sum_{i=1}^n\left(1 - \left(1 - \frac1{n^2}\right)\right) = \frac{1}{n}$$

It now follows that for $x < 0$, $$P\left(\sum_{i=1}^nX_{ni} \leq x\right) \leq P\left(\sum_{i=1}^nX_{ni} \neq 0\right) \leq \frac1n \to 0$$ and for $x > 0$, (actually for $x \geq 0$, but this is stronger than required) $$P\left(\sum_{i=1}^nX_{ni} \leq x\right) \geq P\left(\sum_{i=1}^nX_{ni} = 0\right) = 1 - P\left(\sum_{i=1}^nX_{ni} \neq 0\right) \geq 1 - \frac1n \to 1$$ Since the distribution function of $\delta_0$ is $0$ for $x<0$ and $1$ for $x>0$, the distribution functions converge at every continuity point of the limit, which is exactly weak convergence to $\delta_0$.
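A quick numerical sanity check of the concentration at $0$ (not part of the proof): $P(\sum_i X_{ni}=0)$ is at least $P(\text{every } X_{ni}=0)=(1-1/n^2)^n$, and by Bernoulli's inequality this is at least $1-1/n$, so it tends to $1$. The function name `lower_bound_p_zero` is my own.

```python
from fractions import Fraction

def lower_bound_p_zero(n):
    # P(S_n = 0) >= P(every X_{ni} = 0) = (1 - 1/n^2)^n
    return (1 - Fraction(1, n * n)) ** n

# Bernoulli's inequality: (1 - 1/n^2)^n >= 1 - n/n^2 = 1 - 1/n,
# so the lower bound (and hence P(S_n = 0)) tends to 1.
for n in (10, 100, 1000):
    assert lower_bound_p_zero(n) >= 1 - Fraction(1, n)
```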