Bound for MSE of the Kernel Density Estimator


On page 911 of the paper "Adaptivity in convolution models with partially known noise distribution", the authors say: By using classical results on this estimator, we have $$\mathbb{E}_{f, s_k}\left[\left|f_n(x)-f(x)\right|^2\right] \leq O\left(h_n^{2 \beta-1}\right)+O\left(\frac{1}{n}h_n^{2\left(s_k-1\right)} \exp \left(2 / h_n^{s_k}\right)\right).$$

Background: Here $f_n(x)=\frac{1}{n h_n} \sum_{j=1}^n K_n\left(\frac{Y_j-x}{h_n}\right)$ is the kernel density estimator of $f$, the unknown density of the iid $X_i$s, which are observed through $Y_i=X_i+\varepsilon_i$.

$f$ is assumed to be in the Sobolev class, i.e. $\int f=1$ and $\frac{1}{2 \pi} \int\left|\mathcal{F}(f)(u)\right|^2|u|^{2 \beta} d u \leq L$.

$K_n$ is the deconvolution kernel, i.e. $\mathcal{F}(K_n)(u)=\exp\left(\left(\frac{|u|}{h_n}\right)^{s_k}\right) \mathbf{1}_{|u| \leq 1}$, with bandwidth $h_n =\left(\frac{\log n}{2}-\frac{\bar{\beta}-s_k+1 / 2}{s_k} \log \log n\right)^{-1 / s_k}$.

And the density $g$ of the $\varepsilon_i$s is of the form $\mathcal F(g)(u)=\exp(-|u|^{s_k})$, where $s_k$ is the self-similarity index.
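With these definitions the kernel exactly cancels the noise in the Fourier domain. A quick check (writing $s_k$ for the noise index, $f_Y = f * g$ for the density of the $Y_i$s, and using that $\mathbb{E}[f_n]$ is the convolution of $f_Y$ with $\frac{1}{h_n}K_n(\cdot/h_n)$):

```latex
\mathcal{F}\bigl(\mathbb{E}_{f,s_k}[f_n]\bigr)(u)
  = \mathcal{F}(K_n)(h_n u)\,\mathcal{F}(f)(u)\,\mathcal{F}(g)(u)
  = e^{|u|^{s_k}}\,\mathbf{1}_{|u|\le 1/h_n}\;\mathcal{F}(f)(u)\;e^{-|u|^{s_k}}
  = \mathcal{F}(f)(u)\,\mathbf{1}_{|u|\le 1/h_n}.
```

So $\mathbb{E}[f_n]$ is a spectral cutoff of $f$ at frequency $1/h_n$, and the bias is governed by the Fourier tail of $f$.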

My thoughts: Using the bias–variance decomposition, it remains to show

$\operatorname{Bias}_{f, s_k}(f_n(x),f(x))^2=O\left(h_n^{2 \beta-1}\right)$

$\operatorname{Var}_{f, s_k}(f_n(x))=O\left(\frac{1}{n}h_n^{2\left(s_k-1\right)} \exp \left(2 / h_n^{s_k}\right)\right)$.
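For the bias, one possible route (a sketch, using that $\mathcal{F}(\mathbb{E}[f_n])(u)=\mathcal{F}(f)(u)\mathbf{1}_{|u|\le 1/h_n}$, which one can check from the definitions of $K_n$ and $g$) is Fourier inversion plus Cauchy–Schwarz against the Sobolev condition:

```latex
\bigl|\mathbb{E}[f_n(x)] - f(x)\bigr|
  = \frac{1}{2\pi}\left|\int_{|u|>1/h_n} \mathcal{F}(f)(u)\,e^{-iux}\,du\right|
  \le \frac{1}{2\pi}\left(\int |\mathcal{F}(f)(u)|^2 |u|^{2\beta}\,du\right)^{1/2}
      \left(\int_{|u|>1/h_n} |u|^{-2\beta}\,du\right)^{1/2}
  \le C\,h_n^{\beta-1/2}
```

for $\beta > 1/2$, which squares to the claimed $O(h_n^{2\beta-1})$.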

Using the standard argument (e.g. the one in the Washington University lecture notes), I obtain:

$\operatorname{Bias}_{f, s_k}(f_n(x),f(x))^2=\frac{1}{4} h_n^4 |f^{\prime\prime}(x)|^2 \mu_K^2+o\left(h_n^4\right)$

$\operatorname{Var}_{f, s_k}\left(f_n\left(x\right)\right)=\frac{1}{n h_n} f\left(x\right) \sigma_K^2+o\left(\frac{1}{n h_n}\right)$.
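These classical expansions, however, assume a fixed kernel with $\mu_K=\int t^2 K(t)\,dt<\infty$ and $\sigma_K^2=\int K^2<\infty$, which fails here since $K_n$ changes with $n$ and $\int K_n^2$ blows up. A direct bound (a sketch, assuming the density $f*g$ of the $Y_i$s is bounded) instead uses Parseval:

```latex
\operatorname{Var}_{f,s_k}\bigl(f_n(x)\bigr)
  \le \frac{1}{n h_n^2}\,\mathbb{E}\!\left[K_n\!\left(\frac{Y_1-x}{h_n}\right)^{\!2}\right]
  \le \frac{\|f*g\|_\infty}{n h_n}\int K_n^2(t)\,dt
  = \frac{\|f*g\|_\infty}{2\pi\,n h_n}\int_{-1}^{1}\exp\!\left(2\Bigl(\tfrac{|u|}{h_n}\Bigr)^{s_k}\right)du,
```

and a Laplace-type estimate of the last integral (its mass concentrates near $|u|=1$) yields a factor of order $h_n^{s_k}\exp(2/h_n^{s_k})$, which stays within the claimed $O$-bound at least when $s_k\le 1$.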

How can I proceed from here?