Convergence rate of an estimator


Say we are interested in estimating some unknown real scalar parameter $\alpha$ using data. Suppose the estimator $\widehat \alpha_N$ of $\alpha$ using the data is consistent. I want to know what it means for the convergence rate to be $g(N)$. Are there any good references? Is there a formal definition?

The following, I think, are equivalent (perhaps even redundant) ways to say that the convergence rate is $g(N)$: for all sufficiently large finite $N$,

(1) $\widehat \alpha_N = \alpha + O(g(N))$

(2) $|\widehat \alpha_N - \alpha| = O(g(N))$

(3) $E[(g(N))^{-1} (\widehat \alpha_N - \alpha)]=0$ and $\operatorname{Var}((g(N))^{-1} (\widehat \alpha_N- \alpha))=c_1$

(4) $(g(N))^{-1} (\widehat \alpha_N - \alpha) \stackrel{d}{\rightarrow} N(0,V)$

(5) $\Pr((g(N))^{-1} (\widehat \alpha_N - \alpha)<\varepsilon)= c_2$
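To fix ideas, the standard example of (4) is the sample mean: if $X_1,\dots,X_N$ are i.i.d. with mean $\alpha$ and finite variance $\sigma^2$, and $\widehat \alpha_N = \frac{1}{N}\sum_{i=1}^N X_i$, then the central limit theorem gives
$$\sqrt{N}\,(\widehat \alpha_N - \alpha) \stackrel{d}{\rightarrow} N(0,\sigma^2),$$
so taking $g(N) = N^{-1/2}$ satisfies (4), and one says the sample mean converges at rate $\sqrt{N}$ (or has rate $N^{-1/2}$, depending on the convention).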

Are the above correct ways to define convergence rates? Are there any other ways to define the convergence rate of an estimator?
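As an empirical sanity check on the sample-mean case, here is a small simulation (my own sketch; the choices $\alpha = 2$, $\sigma = 1$, the sample sizes, and the replication count are arbitrary). If $g(N) = N^{-1/2}$ is the right rate, the standard deviation of $\sqrt{N}\,(\widehat \alpha_N - \alpha)$ across replications should stabilize near $\sigma$ as $N$ grows, rather than shrinking to zero or blowing up:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma = 2.0, 1.0  # illustrative true mean and standard deviation
reps = 500               # number of independent replications per sample size

for N in [100, 1_000, 10_000]:
    # reps independent samples of size N; the sample mean estimates alpha
    errs = rng.normal(alpha, sigma, size=(reps, N)).mean(axis=1) - alpha
    # scaled by sqrt(N), the spread of the error should hover near sigma
    print(N, np.sqrt(N) * errs.std())
```

Scaling by the wrong power of $N$ (say $N$ or $N^{1/4}$ instead of $\sqrt{N}$) makes the printed values drift to infinity or to zero, which is one informal way to read definitions (1)-(4): $g(N)^{-1}$ is the fastest-growing scaling under which the error neither degenerates nor diverges.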