In Wikipedia the following is said about the concept of estimator:
"In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.[1] For example, the sample mean is a commonly used estimator of the population mean. "
Now I know that:
The estimator is some sort of function that is applied to the data set / sample. My question is: if we consider point estimators, does the estimator give me a value for each data point? If I have a data set of N values, do I get N values when applying the estimator, or a single value? The fact that one can talk about the expected value of an estimator leads me to believe that the estimator returns a value for each individual data point in the sample, but I am not sure. Additionally, when we talk about expected values, we usually have in mind (in practice and real life) a physical quantity or characteristic that can be measured. So how does it make sense to talk about the expectation value of a function, in this case the estimator?
The estimand is the parameter of the population.
The estimate is what? The value of the population parameter as obtained by applying the estimator to the sample?
An example is given in the Wikipedia quote above. If the sample mean is the estimator of the population mean, is it correct to say that the population mean is the parameter and the value of the sample mean is the estimate?
Can we have different estimators for one population parameter? As in the example above, are there estimators of the population mean other than the sample mean?
I'd really like an example where I can see how all these concepts are connected. But most important for me is the first question: does the estimator take the entire sample as input and give a single value back, or does it give as many values as there are data points in the sample? As I said above, the fact that we talk about the expected value of the estimator (which confuses me, since the estimator is a function, and I have only ever considered expectation values of measurable physical quantities, never of functions) makes me think we should receive as many values as there are data points in the sample.
An estimator is a function that takes the sample values as input, and gives some kind of value as output. What form that output takes depends on what population property it's an estimator of, but typically one sample gives one estimate.
For example, if I take a sample of 100 people and measure their height, and use that to estimate the average height in the population, then:
The estimand is the average height of the population (say, 150 cm)
The estimator is the random variable calculated as the sample average, $\frac{1}{N}\sum X_i$
The estimate is the actual measured value of the sample average (say, 147 cm)
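To make the distinction concrete, here is a minimal sketch in Python (the population and numbers are made up for illustration). Note that the estimator function takes the whole sample and returns exactly one number, which answers your first question:

```python
import random

random.seed(0)

# Hypothetical population of 100,000 heights (cm). In real life you never
# observe this whole list -- that's why you estimate.
population = [random.gauss(150, 10) for _ in range(100_000)]
estimand = sum(population) / len(population)  # the true population mean

# Draw ONE sample of N = 100 people.
sample = random.sample(population, 100)

# The estimator: a function of the ENTIRE sample, returning ONE value.
def sample_mean(xs):
    return sum(xs) / len(xs)

estimate = sample_mean(sample)  # a single number, not 100 numbers

print(f"estimand (true mean): {estimand:.1f}")
print(f"estimate (from one sample of 100): {estimate:.1f}")
```

One sample in, one estimate out. A different sample would give a different estimate, which is exactly why the estimator is treated as a random variable.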
You can have multiple estimators for the same parameter, and a lot of sampling theory is about defining different estimators and examining their properties. For example, instead of taking the sample mean as-is, I could give different people in the sample different weights depending on their sex, and that would give me an alternative estimator. I could even take the tallest height in the sample as my estimator - it might not be a good choice, but it's a valid one.
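To illustrate that last point, here are three different functions of the same sample, all of which formally count as estimators of the population mean (the sample values are simulated; the function names are mine):

```python
import random

random.seed(1)
# One simulated sample of 100 heights from a population with mean 150 cm.
sample = [random.gauss(150, 10) for _ in range(100)]

def mean_estimator(xs):
    return sum(xs) / len(xs)

def midrange_estimator(xs):
    # Average of the smallest and largest observation.
    return (min(xs) + max(xs)) / 2

def max_estimator(xs):
    # The tallest person in the sample: valid, but clearly biased upward.
    return max(xs)

for est in (mean_estimator, midrange_estimator, max_estimator):
    print(f"{est.__name__}: {est(sample):.1f}")
```

All three take the same sample and return a single number; they just have very different statistical properties (bias, variance), which is what makes some estimators better than others.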
EDIT to answer questions from the comments:
It's both. It's a random variable calculated as a function of sample values, which are themselves random variables.
It's still an estimator for the average height of the population. There isn't one canonical estimator per parameter; you can put together any combination of sample values you like and you've got a new estimator for any parameter you want (but it probably won't be a good estimator).
It depends a little on how you structure things, but under certain common assumptions then you can show that the expected value of the sample mean is exactly equal to the population mean, which means it's an unbiased estimator of the population mean.
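The "expected value of the estimator" can be made concrete by simulation: draw many independent samples, compute the estimate from each, and average those estimates. This sketch (with assumed population parameters mu = 150, sigma = 10) shows that the average of many sample means lands very close to the population mean, which is the unbiasedness claim above:

```python
import random

random.seed(2)

mu, sigma = 150.0, 10.0   # assumed population mean and standard deviation
n, reps = 100, 10_000     # sample size, number of repeated samples

def sample_mean(xs):
    return sum(xs) / len(xs)

# One estimate per simulated sample: 10,000 samples -> 10,000 estimates.
estimates = [sample_mean([random.gauss(mu, sigma) for _ in range(n)])
             for _ in range(reps)]

# The simulated expected value of the estimator.
avg_of_estimates = sum(estimates) / reps
print(f"average of {reps} sample means: {avg_of_estimates:.2f}")
# This comes out very close to mu = 150, illustrating unbiasedness.
```

The expectation is over repeated sampling, not over the data points within one sample: each sample still yields exactly one estimate, and the expected value describes how those one-per-sample estimates behave on average.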