Why is $O_{\mathbb{P}}(h^3)$ negligible?


Let $Y_i = m(X_i) + \eta_i$ and $W_j = X_j + U_j$, with $E[\eta_i \mid X_i] = 0$, $X_i \sim f_X$, and $U_i \sim f_U$, be an errors-in-variables regression problem, and let
$$K_U(x) = \frac{1}{2\pi} \int \mathrm{e}^{-itx}\, \frac{\phi_K(t)}{\phi_U(t/h)}\, dt,$$
where $\phi_K$ is the characteristic function of a kernel $K$ and $\phi_U$ is the characteristic function of the error variable $U$.

I have proved that
$$\hat{m}(x) - m(x) = Z_n + O_{\mathbb{P}}(h^3),$$
where $\hat{m}(x)$ is the deconvolution kernel estimator of $m(x)$ and
$$Z_n := f_X^{-1}(x) \left( \frac{1}{nh} \sum_{j=1}^n K_U\!\left( \frac{W_j - x}{h} \right) \right).$$

Now the paper says that the $O_{\mathbb{P}}(h^3)$ term is negligible compared to $Z_n$ because of the bias and variance of $\hat{m}(x)$, which are
$$\operatorname{Bias}(\hat{m}(x)) = \frac{1}{2} h^2 \left( m''(x) + 2 m'(x)\, \frac{f_X'(x)}{f_X(x)} \right) \int u^2 K(u)\, du + o(h^2)$$
and
$$\operatorname{Var}(\hat{m}(x)) = \frac{1}{2\pi c^2 f_X^2(x)\, n h^{2\beta+1}} \int |t|^{2\beta} \phi_K^2(t)\, dt \int \tau^2(x-v) f_X(x-v) f_U(v)\, dv + o\!\left( \frac{1}{n h^{2\beta+1}} \right).$$

My question is: why is $O_{\mathbb{P}}(h^3)$ negligible compared to $Z_n$? Thanks!
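For concreteness, here is the rate comparison I suspect the paper has in mind — a sketch only, under the assumption that $Z_n$ carries the leading bias and variance of $\hat{m}(x)$ given above:

```latex
% Sketch: compare the remainder O_P(h^3) with the orders of Z_n.
% Assumption: Z_n has bias of exact order h^2 and standard deviation
% of order (n h^{2\beta+1})^{-1/2}, per the expansions quoted above.
\[
  \frac{O_{\mathbb{P}}(h^3)}{\operatorname{Bias}(\hat m(x))}
  = \frac{O_{\mathbb{P}}(h^3)}{O(h^2)}
  = O_{\mathbb{P}}(h) \xrightarrow[h \to 0]{} 0 .
\]
% Hence the remainder is of strictly smaller order than the h^2 bias
% term alone, and a fortiori smaller than
% |Z_n| = O_P\!\big(h^2 + (n h^{2\beta+1})^{-1/2}\big).
```

That is, as $h \to 0$ the remainder shrinks faster than even the smallest leading contribution to $Z_n$, which is what "negligible" should mean here; whether this is exactly the paper's intended argument I cannot say.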