Generalization of the Cramér-Rao lower bound


Let $B(p)$ be a Bernoulli random variable with mean $p$. By the Cramér-Rao lower bound, for every unbiased estimator $\hat{\theta}_n$ of the parameter $p$ based on $n$ i.i.d. samples it holds that

$$ E[(\hat{\theta}_n - p)^2] = Var[\hat{\theta}_n] \geq \frac{p (1-p)}{n} $$
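As a numerical sanity check (not part of the original question), the sample mean is unbiased for $p$ and attains this bound exactly, since $Var[\bar{X}_n] = p(1-p)/n$. The Monte Carlo sketch below compares its empirical mean squared error against $p(1-p)/n$; the values of $p$, $n$, and the trial count are arbitrary choices for illustration.

```python
import random

def mse_of_sample_mean(p, n, trials, seed=0):
    """Monte Carlo estimate of E[(theta_hat_n - p)^2] for the sample mean
    of n i.i.d. Bernoulli(p) draws (an unbiased estimator of p)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        theta_hat = sum(rng.random() < p for _ in range(n)) / n
        total += (theta_hat - p) ** 2
    return total / trials

p, n = 0.3, 200
mse = mse_of_sample_mean(p, n, trials=20000)
crlb = p * (1 - p) / n  # Cramer-Rao lower bound for unbiased estimators
print(mse, crlb)  # the empirical MSE should be close to the bound
```

The empirical MSE matches the bound because the sample mean is an efficient estimator in this model.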

Now let $\hat{\theta}_n$ be an estimator (not necessarily unbiased) such that

\begin{equation} \lim_{n \to \infty} E[ (\hat{\theta}_n - p)^2] = 0. \qquad (1) \end{equation}

Can we show a (Cramér-Rao style) lower bound for the error of such estimators? I would even be happy with a statement such as:

Let $\hat{\theta}_n$ be an estimator of $p$ such that (1) holds for all $p \in (0,1)$. Then $$ \lim_{n \to \infty} n E[(\hat{\theta}_n - p)^2] > 0. $$ This would mean that the mean squared error of every such estimator decays no faster than $C/n$ for some constant $C > 0$.
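To illustrate the kind of behavior the conjectured statement describes (my own example, not from the question): the shrinkage estimator $\hat{\theta}_n = (S_n + 1)/(n + 2)$ with $S_n \sim \mathrm{Bin}(n, p)$ is biased, yet satisfies (1), and its MSE decomposes exactly as $Var + bias^2 = [n p (1-p) + (1-2p)^2]/(n+2)^2$, so $n \, E[(\hat{\theta}_n - p)^2] \to p(1-p) > 0$. The sketch below evaluates this closed form.

```python
def n_times_mse_add_one(p, n):
    """Exact n * E[(theta_hat_n - p)^2] for the biased but consistent
    shrinkage estimator theta_hat_n = (S_n + 1) / (n + 2), S_n ~ Bin(n, p).
    MSE = Var + bias^2 = [n p (1-p) + (1 - 2p)^2] / (n + 2)^2."""
    mse = (n * p * (1 - p) + (1 - 2 * p) ** 2) / (n + 2) ** 2
    return n * mse

for n in (10, 100, 10000):
    print(n, n_times_mse_add_one(0.3, n))  # approaches p(1-p) = 0.21
```

So for this particular estimator the limit in the conjectured bound is exactly $p(1-p)$, the same constant as in the Cramér-Rao bound; the open question is whether a strictly positive limit must hold for every estimator satisfying (1).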