Where did the mean estimator come from?


What was the motivation behind the definition of the mean estimator:

$$\hat{\mu}=\frac{1}{N}\sum_{i=1}^N X_{i}$$

Did we come up with this very form through trial and error?

I do know that it's unbiased, but aren't there other estimators that are also unbiased? So why did we favor this particular one?

Best answer:

First, it works better to reserve $N$ for the size of a finite population, and to use $n$ for the size of a sample. So if we take a sample $X_1, X_2, \dots, X_n$ of size $n$ from a population that has mean $\mu,$ then we use the sample mean $$\bar X = \hat\mu = \frac 1 n \sum_{i=1}^n X_i$$ as an estimate of $\mu.$

Intuitively, as @Ian comments, it seems reasonable to try using the mean of a random sample to estimate the mean of a population. More formally, this is called the "Method of Moments." The idea is to use the $k$-th sample moment $\frac 1 n \sum_{i=1}^n X_i^k$ as an estimate of the corresponding population moment $\frac 1 N \sum_{i=1}^N X_i^k$ for a finite population. (A similar expression with an integral is used for some infinite populations.) Using the sample mean $\bar X$ to estimate the population mean $\mu$ is simply the case where $k = 1$ in the Method of Moments.
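To make the Method of Moments concrete, here is a minimal sketch (the data values are made up for illustration): the same one-line formula gives the $k$-th sample moment for any $k$, and $k = 1$ recovers $\bar X$, while combining $k = 1$ and $k = 2$ gives a moment estimate of the variance.

```python
def sample_moment(xs, k):
    """k-th raw sample moment: (1/n) * sum of x_i ** k."""
    return sum(x ** k for x in xs) / len(xs)

# Hypothetical small sample, just for illustration.
xs = [2.0, 4.0, 4.0, 6.0]

mean_hat = sample_moment(xs, 1)    # k = 1: the sample mean, x-bar
second = sample_moment(xs, 2)      # k = 2: second raw moment
var_hat = second - mean_hat ** 2   # method-of-moments variance estimate

print(mean_hat)  # 4.0
print(var_hat)   # 2.0
```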

One good property of an estimator is unbiasedness, and one can show that $\bar X$ is an unbiased estimator of $\mu.$ In symbols: $E(\bar X) = \mu.$
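The proof is a one-line application of linearity of expectation, using the fact that each $X_i$ has $E(X_i) = \mu$:

$$E(\bar X) = E\!\left(\frac 1 n \sum_{i=1}^n X_i\right) = \frac 1 n \sum_{i=1}^n E(X_i) = \frac 1 n \cdot n\mu = \mu.$$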

For sampling from any population with finite variance $\sigma^2$, one has $Var(\bar X) = \sigma^2/n.$ If, moreover, the population is normal, then no other unbiased estimator of $\mu$ has a smaller variance.
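The fact that $Var(\bar X) = \sigma^2/n$ is easy to check by simulation. The sketch below (parameter values are arbitrary choices, not from the answer) repeatedly draws samples of size $n = 25$ from a normal population with $\sigma = 3$ and looks at the spread of the resulting sample means:

```python
import random
import statistics

random.seed(0)
mu, sigma, n = 10.0, 3.0, 25
reps = 20000

# Draw many samples of size n; record the mean of each.
means = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

# Empirically: the sample means center on mu, and their
# variance is close to sigma^2 / n = 9 / 25 = 0.36.
print(statistics.mean(means))      # close to 10
print(statistics.variance(means))  # close to 0.36
```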

So, taken as an estimate of $\mu,$ the sample mean has two nice properties. (1) It is unbiased (neither systematically too large nor too small; aimed at the right target). (2) It has minimal variability (its aim at the target is optimal).

For normal data, you might try using the sample median or the sample midrange to estimate $\mu.$ (The 'midrange' is halfway between the maximum and minimum values.) Both of these alternative estimators are also unbiased. But both of them are more variable than the sample mean $\bar X.$
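A simulation makes this comparison vivid. The sketch below (sample size and seed are arbitrary choices) computes the mean, the median, and the midrange for many normal samples; all three land near $\mu$ on average, but the sample mean has the smallest spread:

```python
import random
import statistics

random.seed(1)
mu, sigma, n = 0.0, 1.0, 15
reps = 20000

means, medians, midranges = [], [], []
for _ in range(reps):
    s = sorted(random.gauss(mu, sigma) for _ in range(n))
    means.append(sum(s) / n)
    medians.append(s[n // 2])             # n is odd, so this is the middle value
    midranges.append((s[0] + s[-1]) / 2)  # halfway between min and max

# All three estimators are centered near mu, but they differ in spread:
print(statistics.variance(means))      # smallest, about sigma^2 / n
print(statistics.variance(medians))    # larger
print(statistics.variance(midranges))  # larger still at this sample size
```

For normal data the median's variance is roughly $(\pi/2)\,\sigma^2/n$, about 57% more than the mean's, which is one quantitative sense in which $\bar X$ "wins."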

Note: You have asked a good question, and there is more to the complete answer than I can discuss here. There are principles of estimation other than the Method of Moments (maximum likelihood, for one). And there are criteria other than unbiasedness and minimal variance that are used in the search for 'good' estimators.