Expected squared difference between the ML estimator and the posterior expectation.


Let $\theta$ be a random parameter with support $[0,1]$ and a positive density, and, conditionally on $\theta$, let $X_1, X_2, \ldots \sim \mathrm{N}(\theta,1)$ be i.i.d. observations. Does $$ \operatorname{E}\Big[n\Big(\hat \theta_n(X_1,\ldots, X_n) - \operatorname{E}[\theta \mid X_1,\ldots, X_n] \Big)^2\Big],$$ where $\hat\theta_n(X_1,\ldots, X_n)$ is the maximum likelihood estimator and $\operatorname{E}[\theta \mid X_1,\ldots, X_n]$ is the posterior mean (the Bayes estimator under squared-error loss), converge to $0$ as $n\to \infty$?
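To make the quantity concrete, here is a minimal Monte Carlo sketch. It adds two assumptions of my own that are not forced by the question: the prior is Uniform$(0,1)$, and the MLE is taken to be the sample mean clipped to $[0,1]$ (so the posterior is a normal distribution truncated to $[0,1]$, whose mean has a closed form). It only illustrates the quantity; it proves nothing.

```python
import numpy as np
from scipy.stats import norm

def posterior_mean(xbar, n):
    """Posterior mean of theta for a Uniform(0,1) prior and X_i | theta ~ N(theta, 1).

    The posterior is N(xbar, 1/n) truncated to [0, 1]; its mean is
    xbar + s * (phi(a) - phi(b)) / (Phi(b) - Phi(a)),
    where s = 1/sqrt(n), a = (0 - xbar)/s, b = (1 - xbar)/s.
    """
    s = 1.0 / np.sqrt(n)
    a = (0.0 - xbar) / s
    b = (1.0 - xbar) / s
    return xbar + s * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

rng = np.random.default_rng(0)
reps = 20_000
for n in (10, 100, 1_000, 10_000):
    theta = rng.uniform(0.0, 1.0, size=reps)                     # theta drawn from the prior
    xbar = theta + rng.normal(0.0, 1.0, size=reps) / np.sqrt(n)  # Xbar_n given theta ~ N(theta, 1/n)
    mle = np.clip(xbar, 0.0, 1.0)                                # sample mean clipped to [0, 1]
    diff = mle - posterior_mean(xbar, n)
    print(f"n = {n:6d}:  E[ n * (MLE - posterior mean)^2 ] ~ {np.mean(n * diff**2):.3e}")
```

Only the sample mean is simulated, since it is a sufficient statistic here and both estimators depend on the data only through it.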

I can show that $n\Big(\hat \theta_n(X_1,\ldots, X_n) - \operatorname{E}[\theta \mid X_1,\ldots, X_n] \Big)^2$ converges to $0$ almost surely, but that alone does not imply that its expectation converges to $0$ as well.
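(For what it's worth, the standard counterexample $Y_n = n\,\mathbf{1}_{[0,1/n]}$ on $[0,1]$ with Lebesgue measure, for which $Y_n \to 0$ a.s. while $\operatorname{E}[Y_n] = 1$ for every $n$, shows that almost sure convergence alone is not enough; it seems some domination or uniform integrability argument is needed here.)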

Is there any well-known theorem that could be readily applied to prove the claim?