Does anyone know where the estimator $\hat{\theta} = X_{(1)} + X_{(n)}$ for a $U(0, \theta)$ distribution comes from?
Where:
$X_{(1)} = \min_i X_i$
$X_{(n)} = \max_i X_i$
I know it is not the MLE, MME, or UMVUE. Could anyone shed some light on where this estimator may come from? My only guess is that it has something to do with the fact that it is built from order statistics.
Thanks!
The only (somewhat) meaningful origin I can think of is as an estimator of $EX = \frac{a+b}{2} = \frac{\theta}{2}$, rescaled appropriately. Clearly, the midrange $$ \frac{1}{2}\left(X_{(1)} + X_{(n)}\right) $$ is a consistent estimator of $EX$ (and in some cases it is unbiased as well), so doubling it, i.e. taking $\hat{\theta}_n = X_{(1)} + X_{(n)}$, gives a consistent estimator of $\theta = 2\,EX$. I don't know whether it has a special name, but I would call it a "plug-in" estimator, because you substitute the unknown endpoints $a$ and $b$ with their MLEs $X_{(1)}$ and $X_{(n)}$; by the continuous mapping theorem, the consistency property is preserved. However, note that in the $U(0, \theta)$ case this estimator is "bad" (inadmissible), because it estimates the lower endpoint of the distribution, which is already known to be $0$; this adds completely unnecessary noise that inflates the risk of the estimator.
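To see the risk inflation concretely, here is a quick Monte Carlo sketch (sample size $n = 20$, $\theta = 1$, and the replication count are arbitrary choices for illustration) comparing the MSE of $X_{(1)} + X_{(n)}$ with that of the unbiased rescaled MLE $\frac{n+1}{n} X_{(n)}$, which uses only the upper order statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.0, 20, 100_000

# reps independent samples of size n from U(0, theta)
x = rng.uniform(0.0, theta, size=(reps, n))
lo = x.min(axis=1)   # X_(1)
hi = x.max(axis=1)   # X_(n)

est_sum = lo + hi              # the estimator in question
est_adj = (n + 1) / n * hi     # unbiased rescaling of the MLE X_(n)

mse_sum = np.mean((est_sum - theta) ** 2)
mse_adj = np.mean((est_adj - theta) ** 2)
print(mse_sum, mse_adj)
```

Both estimators are unbiased, but the exact variances are $\frac{2\theta^2}{(n+1)(n+2)}$ for $X_{(1)} + X_{(n)}$ versus $\frac{\theta^2}{n(n+2)}$ for $\frac{n+1}{n} X_{(n)}$, so for large $n$ the sum estimator has roughly twice the MSE, and the simulation should reflect that.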