I found the theorem below at https://math.stackexchange.com/a/1366549/533565, but I cannot find it in any of my textbooks, nor in any published paper on Google Scholar. I want to cite this theorem in my project. Does anybody know where this theorem is from?
A theorem by Markov states that if a sequence of random variables $X_1, X_2, \ldots$ with finite variances satisfies one of the following conditions:
- $\lim_{n \to \infty} \frac{\mathrm{Var} X_n}{n^2} = 0$;
- $X_1, X_2, \ldots$ are independent and $\lim_{n \to \infty}\frac{1}{n^2}\sum_{i = 1}^n \mathrm{Var}\, X_i = 0$;
then the sequence $Y_n = \frac{1}{n}\sum_{i=1}^n (X_i - \mathsf{E} X_i)$ converges to $0$ in probability as $n \to \infty$.
This is false: the first condition is insufficient for convergence, even if it is augmented by assuming that the variables $\{X_i\}_{i=1}^{\infty}$ are mutually independent. The second condition, however, is sufficient.
Define $Y_n = \frac{1}{n}\sum_{i=1}^n (X_i-E[X_i])$.
Counterexample to the first condition.
Let $\{X_n\}_{n=1}^{\infty}$ be mutually independent Gaussian random variables with $X_n \sim N(0, n)$ for $n \in \{1, 2, 3, \ldots\}$. Then the first condition holds because $$ \frac{\mathrm{Var}(X_n)}{n^2} = \frac{n}{n^2} \rightarrow 0. $$ However, $Y_n$ is Gaussian with mean zero and variance $\frac{1}{n^2}\sum_{i=1}^n i = \frac{n(n+1)}{2n^2} = \frac{n+1}{2n}$. That is, $$ Y_n \sim N\left(0, \frac{n+1}{2n}\right). $$ Thus, $Y_n$ does not converge to zero in probability; its limiting distribution is $N(0, 1/2)$.
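As a sanity check, here is a minimal Monte Carlo sketch of this counterexample (assuming NumPy; the function name `sample_Y`, the seed, and the trial counts are my own illustrative choices). The empirical standard deviation of $Y_n$ should hover near $\sqrt{1/2} \approx 0.707$ rather than shrink to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(n, n_trials=2_000):
    """Draw n_trials realizations of Y_n = (1/n) * sum_{i=1}^n X_i,
    where the X_i ~ N(0, i) are independent (E[X_i] = 0, so no centering needed)."""
    sds = np.sqrt(np.arange(1, n + 1))             # SD of X_i is sqrt(i)
    X = rng.standard_normal((n_trials, n)) * sds   # each row: one path X_1, ..., X_n
    return X.mean(axis=1)                          # one realization of Y_n per row

for n in [10, 100, 1000, 5000]:
    y = sample_Y(n)
    print(f"n={n:5d}  empirical SD of Y_n = {y.std():.4f}  "
          f"theory sqrt((n+1)/(2n)) = {np.sqrt((n + 1) / (2 * n)):.4f}")
```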
Proof of convergence under the second condition.
We use the standard Markov/Chebyshev inequality. Fix $\epsilon > 0$. Since $E[Y_n] = 0$ and the $X_i$ are independent, $E[Y_n^2] = \mathrm{Var}(Y_n) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i)$. Then $$ P[|Y_n - 0| \geq \epsilon] = P[Y_n^2 \geq \epsilon^2] \leq \frac{E[Y_n^2]}{\epsilon^2} = \frac{1}{n^2 \epsilon^2}\sum_{i=1}^n \mathrm{Var}(X_i) \rightarrow 0, $$ where the limit is exactly the second condition. Hence $Y_n \rightarrow 0$ in probability.
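To see the second condition in action, here is a small simulation sketch (again assuming NumPy; the uniform distribution, `tail_prob` name, $\epsilon = 0.05$, and trial counts are my own choices). For independent $X_i \sim \mathrm{Uniform}(0,1)$ we have $\mathrm{Var}(X_i) = 1/12$, so $\frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{1}{12n} \rightarrow 0$, and both the estimated tail probability and the Chebyshev bound go to zero:

```python
import numpy as np

rng = np.random.default_rng(1)

def tail_prob(n, eps, n_trials=10_000):
    """Estimate P[|Y_n| >= eps] for independent X_i ~ Uniform(0, 1),
    where E[X_i] = 1/2 and Var(X_i) = 1/12."""
    X = rng.random((n_trials, n))   # rows are independent paths X_1, ..., X_n
    Y = (X - 0.5).mean(axis=1)      # Y_n = (1/n) * sum (X_i - E[X_i])
    return np.mean(np.abs(Y) >= eps)

eps = 0.05
for n in [10, 100, 1000]:
    bound = 1 / (12 * n * eps**2)   # Chebyshev: Var(Y_n)/eps^2 = 1/(12 n eps^2)
    print(f"n={n:4d}  P[|Y_n| >= {eps}] ~ {tail_prob(n, eps):.4f}  "
          f"Chebyshev bound = {bound:.4f}")
```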