Asymptotic normality of M-estimators


I'm struggling with an argument in van der Vaart's proof of Theorem 5.21 on p. 52:

https://books.google.co.uk/books?id=UEuQEM5RjWgC&pg=PA36&lpg=PA36&dq=lemma+4.2+van+der+vaart+one-to-one+differentiable&source=bl&ots=mnRJLEcYHy&sig=ZjAfaVM50LOxJUkphbBT2sJTVcc&hl=it&sa=X&ved=0ahUKEwiglKf31b3JAhWHox4KHS7FCaMQ6AEIHzAA#v=onepage&q=lemma%204.2%20van%20der%20vaart%20one-to-one%20differentiable&f=false

I don't understand the step where he says

[...] $\mathbb{G}_n \psi_{\hat{\theta}_n}- \mathbb{G}_n \psi_{\theta_0}\rightarrow_p 0$

For a nonrandom sequence $\hat{\theta}_n$ this is immediate from the fact that the means of these variables are zero, while the variances are bounded by $P ||\psi_{\hat{\theta}_n}- \psi_{\theta_0}||^2$.
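For reference, and to fix notation (this is van der Vaart's standard empirical-process notation, not anything specific to my question):

$$\mathbb{P}_n f = \frac{1}{n}\sum_{i=1}^n f(X_i), \qquad \mathbb{G}_n f = \sqrt{n}\,(\mathbb{P}_n f - Pf) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \bigl(f(X_i) - Pf\bigr),$$

where $X_1, \dots, X_n$ are i.i.d. with law $P$ and $Pf = \mathbb{E}_P f(X_1)$.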

Suppose the parameter is one-dimensional, so that $||\psi_{\hat{\theta}_n}- \psi_{\theta_0}||^2=|\psi_{\hat{\theta}_n}- \psi_{\theta_0}|^2=(\psi_{\hat{\theta}_n}- \psi_{\theta_0})^2$.

I understand that $\mathbb{E}_P(\mathbb{G}_n \psi_{\hat{\theta}_n}- \mathbb{G}_n \psi_{\theta_0})=0$
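Indeed, for any fixed $f$ this is just linearity of the expectation:

$$\mathbb{E}_P\,\mathbb{G}_n f = \sqrt{n}\,\bigl(\mathbb{E}_P\,\mathbb{P}_n f - Pf\bigr) = \sqrt{n}\,(Pf - Pf) = 0.$$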

I don't understand why the variance is bounded by that expression. Computing directly, and using $P\psi_{\theta_0}=0$ from the statement of the theorem,

$$\begin{aligned}
\operatorname{Var}(\mathbb{G}_n \psi_{\hat{\theta}_n} - \mathbb{G}_n \psi_{\theta_0})
&= \operatorname{Var}\Bigl(\sqrt{n}\,\bigl(\mathbb{P}_n\psi_{\hat{\theta}_n} - \mathbb{E}_P\psi_{\hat{\theta}_n}(X_1) - \mathbb{P}_n \psi_{\theta_0}\bigr)\Bigr) \\
&= n\operatorname{Var}\bigl(\mathbb{P}_n\psi_{\hat{\theta}_n} - \mathbb{P}_n \psi_{\theta_0}\bigr) \\
&= n\,\mathbb{E}_P\Bigl(\frac{1}{n}\sum_{i=1}^n \bigl(\psi_{\hat{\theta}_n}(X_i) - \psi_{\theta_0}(X_i)\bigr)\Bigr)^{\!2} \\
&= \frac{1}{n}\,\mathbb{E}_P\Bigl(\sum_{i=1}^n \bigl(\psi_{\hat{\theta}_n}(X_i) - \psi_{\theta_0}(X_i)\bigr)\Bigr)^{\!2}.
\end{aligned}$$
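Expanding the square and using that the $X_i$ are i.i.d., with $D_i := \psi_{\hat{\theta}_n}(X_i) - \psi_{\theta_0}(X_i)$ this equals

$$\frac{1}{n}\,\mathbb{E}_P\Bigl(\sum_{i=1}^n D_i\Bigr)^{\!2} = \mathbb{E}_P D_1^2 + (n-1)\bigl(\mathbb{E}_P D_1\bigr)^2,$$

and the second term does not obviously vanish.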

How can this be bounded by $\mathbb{E}_P\bigl(\psi_{\hat{\theta}_n}(X_1)- \psi_{\theta_0}(X_1)\bigr)^2 = \mathbb{E}_P D_1^2$?
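As a sanity check I ran a small simulation on a toy example of my own (not from the book): data $X_i \sim N(0,1)$, the median estimating function $\psi_\theta(x) = \mathbf{1}\{x \le \theta\} - 1/2$, true zero $\theta_0 = 0$, and a fixed $\theta_1$ playing the role of a nonrandom $\hat{\theta}_n$. The numbers are consistent with the claimed bound, but I still don't see how to get it from my computation above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps = 200, 20_000
theta0, theta1 = 0.0, 0.3  # theta1 is a fixed, nonrandom stand-in for hat{theta}_n

def psi(theta, x):
    # median estimating function; P psi_{theta0} = 0 since theta0 is the true median
    return (x <= theta).astype(float) - 0.5

X = rng.standard_normal((reps, n))  # reps independent samples of size n

def Gn(theta):
    # G_n f = sqrt(n) (P_n f - P f); here P psi_theta = Phi(theta) - 1/2
    return np.sqrt(n) * (psi(theta, X).mean(axis=1) - (norm.cdf(theta) - 0.5))

diff = Gn(theta1) - Gn(theta0)
print("Monte Carlo Var(G_n psi_1 - G_n psi_0):", diff.var())
# (psi_1 - psi_0)^2 = 1{theta0 < x <= theta1}, so P|psi_1 - psi_0|^2 = Phi(theta1) - Phi(theta0)
print("bound P|psi_1 - psi_0|^2:              ", norm.cdf(theta1) - norm.cdf(theta0))
```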