Given a list of real numbers $(a_1, a_2, \ldots, a_n)$ we are interested in the average $\mu = \dfrac{1}{n}\displaystyle\sum_{i = 1}^{n}a_i.$ You are given a simple probabilistic algorithm that approximates $\mu$:
Choose a parameter $k$ (a fixed parameter you determine, not chosen at random)
$s := 0$
repeat $k$ times:
$i\leftarrow$ index chosen uniformly at random with $1\leq i\leq n$
$s \leftarrow s + a_i$
$X = \dfrac{s}{k}$
return $X$
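For concreteness, the algorithm above can be sketched in Python (the function name and signature are my own choices, not part of the problem statement):

```python
import random

def approx_mean(a, k, seed=None):
    """Estimate the average of the list a by averaging k uniformly sampled entries."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(k):
        i = rng.randrange(len(a))  # uniform index 0 .. n-1
        s += a[i]
    return s / k
```

Each run returns one realization of the random variable $X$; averaging many independent runs approaches $\mu$, consistent with $\mathbb{E}[X]=\mu$.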
I'm asked to show that $\text{Var}[X] \leq \dfrac{1}{4k}.$ So far I noticed that clearly $\mathbb{E}[X] = \mu$, and for estimating the variance one probably wants to use the formula $\text{Var}[X] = \mathbb{E}[X^2]-\mathbb{E}[X]^2$, but I don't know how to determine $\mathbb{E}[X^2].$ Could someone give me some advice on how to proceed?
I assume that $\forall i$, $a_i\in [0,1]$.
The algorithm can be analyzed by defining the i.i.d. random variables $X_1,\ldots,X_k$, each with the uniform distribution over the list $a_1,\ldots,a_n$ (i.e., $X_j = a_{I_j}$ for a uniformly random index $I_j$).
Then the output of the algorithm is $X=\frac 1k \sum_{i=1}^k X_i$, and by independence
$$\operatorname{Var}(X) = \frac 1{k^2}\sum_{i=1}^k \operatorname{Var}(X_i) = \frac 1k \operatorname{Var}(X_1)\leq \frac 1{4k},$$
where the last inequality follows from Popoviciu's inequality: a random variable supported on an interval $[m,M]$ has variance at most $\frac{(M-m)^2}{4}$, which for $[0,1]$ gives $\operatorname{Var}(X_1)\leq \frac 14$.
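As an empirical sanity check of the bound, one can estimate $\operatorname{Var}(X)$ by Monte Carlo in Python. The list below (half zeros, half ones) is the extremal case for Popoviciu's inequality, so $\operatorname{Var}(X_1)=\frac14$ and $\operatorname{Var}(X)=\frac1{4k}$ exactly; the helper name and trial count are illustrative choices:

```python
import random

def empirical_variance(a, k, trials=20000, seed=1):
    """Sample variance of the estimator X over many independent runs."""
    rng = random.Random(seed)
    n = len(a)
    xs = [sum(a[rng.randrange(n)] for _ in range(k)) / k for _ in range(trials)]
    m = sum(xs) / trials
    return sum((x - m) ** 2 for x in xs) / trials

# Half zeros, half ones: Var(X_1) = 1/4, so Var(X) = 1/(4k).
a = [0.0] * 50 + [1.0] * 50
k = 10
v = empirical_variance(a, k)
# v should be close to 1/(4k) = 0.025
```

For lists that are not split evenly between the endpoints 0 and 1, the empirical variance comes out strictly below $\frac1{4k}$, as the bound predicts.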