Consider a random variable $X$ and stochastically independent repetitions $X_1,...,X_n$ of $X$.
For each vector $\overrightarrow{a}=(a_1,...,a_n) \in \mathbb{R}^{n}$ with $a_i > 0$ we denote by $T_{\overrightarrow{a}}$ the estimator
$$T_{\overrightarrow{a}} = a_1 \cdot X_1 + a_2 \cdot X_2 + ... + a_n \cdot X_n$$
for the expected value.
Solution (as provided in the textbook):
$$E(T_{\overrightarrow{a}}) = E(a_1 \cdot X_1 + a_2 \cdot X_2 + ... + a_n \cdot X_n) = a_1 \cdot E(X_1) + a_2 \cdot E(X_2) + ... + a_n \cdot E(X_n) \\ = a_1 \cdot E(X) + a_2 \cdot E(X) + ... + a_n \cdot E(X) = (a_1 + a_2 + ... + a_n) \cdot E(X)$$
$$\Rightarrow \quad E(T_{\overrightarrow{a}}) = E(X) \Leftrightarrow a_1 + a_2 + ... + a_n = 1$$
The class of unbiased estimators is therefore the set of all $T_{\overrightarrow{a}}$ with $a_1 + a_2 + ... + a_n = 1$.
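As a side remark, the textbook result can be sanity-checked with a small Monte Carlo simulation. This is only a sketch, not part of the solution: the weight vector and the Normal(5, 1) distribution for $X$ are made-up examples.

```python
# Monte Carlo check: if a_1 + ... + a_n == 1, then T_a should be
# unbiased for E(X). Weights and distribution are illustrative choices.
import random

random.seed(0)
a = [0.1, 0.2, 0.3, 0.4]    # example weights summing to 1
mu = 5.0                    # true mean of the assumed X ~ Normal(5, 1)
trials = 20000

estimates = []
for _ in range(trials):
    xs = [random.gauss(mu, 1.0) for _ in range(len(a))]
    estimates.append(sum(ai * xi for ai, xi in zip(a, xs)))

mean_T = sum(estimates) / trials   # average of T_a; should be close to mu
```

With 20000 trials the average of the $T_{\overrightarrow{a}}$ values lands very close to $\mu = 5$, as the unbiasedness condition predicts.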
I know how to determine all $\overrightarrow{a}$ for which $T_{\overrightarrow{a}}$ is an unbiased estimator of the expected value $E(X)$ of $X$, but how do I do the same for the variance $Var(X)$ of $X$? How can the following task be solved in the same way as the one from the textbook?
The task for which I need help:
Consider a random variable $X$ whose mean value $\mu$ is known, and stochastically independent repetitions $X_1,...,X_n$ of $X$.
For each vector $\overrightarrow{a}=(a_1,...,a_n) \in \mathbb{R}^{n}$ with $a_i > 0$ we denote by $T_{\overrightarrow{a}}$ the estimator
$$T_{\overrightarrow{a}} = a_1 \cdot (X_1-\mu)^2 + a_2 \cdot (X_2-\mu)^2 + ... + a_n \cdot (X_n-\mu)^2$$ for the variance of $X$.
a) Determine all $\overrightarrow{a}$ for which $T_\overrightarrow{a}$ is an unbiased estimator for the variance $Var(X)$ of $X$.
b) Determine the most efficient among the unbiased estimators $T_{\overrightarrow{a}}$.
My thoughts
a) First try: $$E(T_{\overrightarrow{a}}) = E\big(a_1 \cdot (X_1-\mu)^2 + a_2 \cdot (X_2-\mu)^2 + ... + a_n \cdot (X_n-\mu)^2\big) \\ = a_1 \cdot E\big((X_1-\mu)^2\big) + a_2 \cdot E\big((X_2-\mu)^2\big) + ... + a_n \cdot E\big((X_n-\mu)^2\big) \\ = \sum_{i=1}^n a_i \cdot E\big((X_i-\mu)^2\big)$$ Because $E\big((X_i-\mu)^2\big) = \sigma^2 = Var(X)$ for all $i$, this gives $$E(T_{\overrightarrow{a}}) = \sigma^2 \cdot \sum_{i=1}^n a_i \\ \Rightarrow \quad Var(X) = \frac{E(T_{\overrightarrow{a}})}{\sum_{i=1}^n a_i}, \qquad \hat{\sigma}^2 = \frac{T_{\overrightarrow{a}}}{\sum_{i=1}^n a_i}$$ Second try (using the independence of the $X_i$): $$Var(T_{\overrightarrow{a}}) = a_1^2 \cdot Var\big((X_1-\mu)^2\big) + a_2^2 \cdot Var\big((X_2-\mu)^2\big) + ... + a_n^2 \cdot Var\big((X_n-\mu)^2\big) \\ = a_1^2 \cdot Var\big((X-\mu)^2\big) + a_2^2 \cdot Var\big((X-\mu)^2\big) + ... + a_n^2 \cdot Var\big((X-\mu)^2\big) \\ = (a_1^2 + a_2^2 + ... + a_n^2) \cdot Var\big((X-\mu)^2\big) = c \cdot \sum_{i=1}^n a_i^2$$ where $c = Var\big((X-\mu)^2\big)$ is a constant $(\ne 0)$ that does not depend on $\overrightarrow{a}$.
Should $Var(T_{\overrightarrow{a}})$ be minimized subject to the constraint $E(T_{\overrightarrow{a}})=\sigma^2$ in order to get a final result?
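To get a feeling for this idea numerically, here is a sketch under the assumption $X \sim \text{Normal}(0, 1)$ (my own example distribution, not from the task). Since $Var(T_{\overrightarrow{a}}) = c \cdot \sum_i a_i^2$, comparing two weight vectors that both sum to 1 shows which direction the minimization goes:

```python
# Sketch: among weight vectors with sum(a) == 1, compare Var(T_a) for
# equal weights a_i = 1/n against an unequal choice. Both distributions
# and weights are illustrative assumptions.
import random

random.seed(1)
mu, n, trials = 0.0, 4, 20000

def var_of_T(a):
    """Sample variance of T_a = sum_i a_i * (X_i - mu)^2 over many trials."""
    vals = []
    for _ in range(trials):
        xs = [random.gauss(mu, 1.0) for _ in range(n)]
        vals.append(sum(ai * (x - mu) ** 2 for ai, x in zip(a, xs)))
    m = sum(vals) / trials
    return sum((v - m) ** 2 for v in vals) / (trials - 1)

equal = [1 / n] * n              # a_i = 1/n, weights sum to 1
skewed = [0.7, 0.1, 0.1, 0.1]    # also sums to 1, but unequal

var_equal, var_skewed = var_of_T(equal), var_of_T(skewed)
```

In this experiment the equal weights give the smaller sample variance, which matches the intuition that $\sum_i a_i^2$ is smallest at $a_i = 1/n$ when $\sum_i a_i$ is held fixed at 1.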
b) I don't know how to solve this part.
Thanks :-)