I am currently studying the weak law of large numbers and I think I understand the idea behind it: if we take a sample that is large enough, the sample mean of the random variables $X_1,\dots,X_n$ converges (in probability) to the population mean. That would be very convenient, because most of the time the population is too big to be observed in full.
My problem is that I cannot find a proper example or exercise that uses this theorem. All I can find is its proof, which is not required at this point in my course because my professor has not yet introduced Chebyshev's inequality.
Could you post some small problems here that I could work on to grasp and practice the mechanics behind the concept?
Thank you very much
Let's take a fair coin and set $H=1$ and $T=0$.
The random variable has the following PMF:
$$\mathbb{P}(X=x)= \begin{cases} \frac{1}{2}, & \text{if $x=0$} \\ \frac{1}{2}, & \text{if $x=1$} \end{cases}$$
We have $\mathbb{E}[X]=\frac{1}{2}$ and $\mathbb{V}[X]=\frac{1}{4}$
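These two moments can be checked directly from the PMF; a quick sanity check in Python (nothing here beyond the definitions $\mathbb{E}[X]=\sum_x x\,\mathbb{P}(X=x)$ and $\mathbb{V}[X]=\mathbb{E}[X^2]-\mathbb{E}[X]^2$):

```python
# PMF of the fair-coin indicator: P(X=0) = P(X=1) = 1/2
pmf = {0: 0.5, 1: 0.5}

# E[X] = sum of x * P(X=x)
mean = sum(x * p for x, p in pmf.items())

# V[X] = E[X^2] - E[X]^2
second_moment = sum(x**2 * p for x, p in pmf.items())
variance = second_moment - mean**2

print(mean)      # 0.5
print(variance)  # 0.25
```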
Now let's take the rv $\overline{X}_n$, the sample mean of $n$ independent tosses. You surely know that
$$\mathbb{E}[\overline{X}_n]=\frac{1}{2} \quad\text{and}\quad \mathbb{V}[\overline{X}_n]=\frac{1}{4n}.$$
Thus the mean of the sample mean is $\frac{1}{2}$, and its variance goes to zero as $n \rightarrow +\infty$.
So you can see why the sample mean "converges" in probability to the population mean $\frac{1}{2}$: for large $n$ its variance is arbitrarily close to zero, so the sample mean is very unlikely to be far from $\frac{1}{2}$.
Formally, these conditions on the mean and variance together are necessary and sufficient for convergence in $L^2$, which in turn implies convergence in probability.
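You can also watch the law at work in a small simulation (a sketch using Python's `random` module; the seed and sample sizes are arbitrary choices, not part of the theorem):

```python
import random

random.seed(0)  # fix the seed so the run is reproducible


def sample_mean(n):
    """Average of n fair-coin tosses, coding H = 1 and T = 0."""
    return sum(random.randint(0, 1) for _ in range(n)) / n


# As n grows, the sample mean clusters around the population mean 1/2,
# because V[X_bar_n] = 1/(4n) shrinks to zero.
for n in [10, 100, 10_000, 1_000_000]:
    print(n, sample_mean(n))
```

Running this, the printed averages drift toward $0.5$ as $n$ increases, which is exactly the behavior the weak law predicts.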