Intuition Behind Unbiased Estimators


Let $X^\theta=(X^\theta_1,\dots,X^\theta_p)$ be a random vector whose distribution depends on a parameter $\theta\in\bf{R}$.

One way of defining 'correctness' of an estimator $e:\bf{R}^p\to\bf{R}$ for $\theta$ is to demand $E(e(X^\theta))=\theta$ for all $\theta$ (unbiased estimator).

However, the more intuitive approach for me would be choosing some prior distribution for $\theta$ such that $(\theta,X^\theta)$ has a continuous joint density, and demanding that $e(x_1,\dots,x_p)$ pick the unique maximizer of that density along the slice $X^\theta=(x_1,\dots,x_p)$, i.e. the maximum a posteriori (MAP) estimate.

Is this intuition of any worth? If yes, are there important scenarios where it coincides with the definition of unbiased estimator?
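To make the question concrete, here is a minimal numerical sketch (the Gaussian model, the prior width `tau`, and all numbers are my own illustrative choices, not part of the question): with $X_i\sim N(\theta,\sigma^2)$ and prior $\theta\sim N(0,\tau^2)$, the maximizer of the joint density along the observed slice is a shrunken sample mean $w\bar x$, and as $\tau\to\infty$ the weight $w\to 1$, so the MAP estimate approaches the unbiased estimator $\bar x$.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, tau, n = 1.0, 50.0, 25   # noise sd, prior sd (wide), sample size -- all illustrative
theta_true = 2.0                # hypothetical true parameter, used only to simulate data
x = rng.normal(theta_true, sigma, size=n)

# Density along the slice X = (x_1, ..., x_n) is proportional to
# prior(theta) * likelihood(theta); maximize its log on a fine grid.
grid = np.linspace(-5.0, 5.0, 100001)
log_density = (-grid**2 / (2 * tau**2)
               - (np.sum(x**2) - 2 * grid * x.sum() + n * grid**2) / (2 * sigma**2))
theta_map = grid[np.argmax(log_density)]

# Conjugate closed form: the maximizer is the sample mean shrunk toward the prior mean 0.
w = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)   # w -> 1 as tau -> infinity
print(theta_map, w * x.mean(), x.mean())           # all three nearly equal for wide tau
```

So for a very diffuse prior in this conjugate Gaussian model, the MAP notion of "correct estimator" essentially agrees with the unbiased one.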

On BEST ANSWER

Unbiasedness can be a very bad thing in some instances. This is not my own discovery; one example of it is discussed here: https://arxiv.org/pdf/math/0206006.pdf

What you're describing (the MAP estimator) is sometimes used, and the posterior expected value is also commonly used. It is well known that methods based on posterior distributions can avoid pathologies afflicting unbiased estimation.

And there are things like the James–Stein estimator for the mean of a multivariate normal in dimension $p\ge 3$, which is not even decision-theoretically admissible and yet is superior in the mean-squared-error sense to every unbiased estimator.
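The last point is easy to check by simulation (a minimal sketch; the dimension, true mean vector, and trial count are arbitrary illustrative choices). For $X\sim N(\theta, I_p)$ it compares the mean squared error of the unbiased estimator $X$ itself against the James–Stein shrinkage estimator $\big(1-\frac{p-2}{\lVert X\rVert^2}\big)X$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10                      # dimension; James-Stein requires p >= 3
theta = np.ones(p)          # hypothetical true mean vector
n_trials = 20000

# One observation X ~ N(theta, I_p) per trial.
X = rng.normal(loc=theta, scale=1.0, size=(n_trials, p))

# Unbiased estimator: X itself. Its risk is E||X - theta||^2 = p.
mle_err = np.sum((X - theta) ** 2, axis=1)

# James-Stein estimator: shrink X toward the origin.
shrink = 1.0 - (p - 2) / np.sum(X ** 2, axis=1)
js = shrink[:, None] * X
js_err = np.sum((js - theta) ** 2, axis=1)

print(mle_err.mean(), js_err.mean())  # James-Stein risk comes out strictly smaller
```

The estimated risk of $X$ is close to $p$, while the James–Stein estimator's is visibly lower, despite (indeed, because of) its bias toward the origin.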