I'm reading Hansen's (2008, p. 729) Theorem 1, where he bounds the variance of averages of the form $$\hat\Psi(x)=\frac1{Th}\sum\limits_{t=1}^T Y_t K\bigg(\frac{x-X_t}h\bigg),$$ where $\{(Y_t,X_t)\}_{t=1}^T$ is a collection of random variables and $K$ is a kernel function. Under some conditions he obtains $$\operatorname{Var}(\hat \Psi(x))\leq \frac C{Th}.$$
In the framework I'm working on, $X_t=\frac tT$, which is deterministic. Does this change the rate of convergence (that is, the above upper bound)?
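For concreteness, here is a minimal sketch of the estimator in this fixed-design setting (Python; the Gaussian kernel, the regression function $m(u)=\sin(2\pi u)$, and the bandwidth choice are my illustrative assumptions, not part of the setup above):

```python
import numpy as np

# Minimal sketch of the fixed-design estimator Psi_hat(x) with X_t = t/T.
# The Gaussian kernel and m(u) = sin(2*pi*u) are illustrative choices only.

def gaussian_kernel(v):
    return np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)

def psi_hat(x, Y, h):
    """Psi_hat(x) = (1/(T h)) * sum_t Y_t K((x - t/T)/h)."""
    T = len(Y)
    X = np.arange(1, T + 1) / T          # deterministic design points t/T
    return np.sum(Y * gaussian_kernel((x - X) / h)) / (T * h)

rng = np.random.default_rng(0)
T = 1000
h = T ** (-1 / 5)                        # illustrative bandwidth
X = np.arange(1, T + 1) / T
Y = np.sin(2 * np.pi * X) + 0.5 * rng.standard_normal(T)
print(psi_hat(0.25, Y, h))               # should be near m(0.25) = 1
```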
It is important to mention that he works with mixing sequences (not the independent case). He makes assumptions on the boundedness of the conditional expectation $\sup_x E(Y_t\mid X_t=x)$ and on the density $f$ of $X_t$. But in my case, where $X_t$ is deterministic, these assumptions make no sense.
Can someone give me suggestions on how to tackle this problem?
Comment
I have already obtained a bound of $\frac C{Th^2}$ (discarding the assumptions involving the density of $X_t$). In the asymptotic framework $h\to 0$, $Th^2\to \infty$ as $T\to\infty$, this means the bound I obtained converges more slowly than Hansen's. For the experienced people: is this expected, or can I improve this result?
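To spell the comparison out (a quick check based only on the two bounds as stated):
$$\frac{C/(Th^2)}{C/(Th)}=\frac1h\to\infty\quad\text{as }h\to 0,$$
so a $C/(Th^2)$ bound is indeed of strictly larger order than $C/(Th)$.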
Thanks in advance!
I assume that the underlying model is $Y_t=m(X_t)+\epsilon_t$, where $\epsilon_1,\ldots,\epsilon_T$ are i.i.d. $(0,\sigma^2)$. Also let $Z_t:=K((X_t-x)/h)$. Then
$$ \hat{\Psi}(x)=\frac{1}{Th}\sum_{t=1}^T m(X_t)Z_t+\frac{1}{Th}\sum_{t=1}^T\epsilon_t Z_t . $$

Since the $X_t$'s are not random, the first sum is deterministic and contributes nothing to the variance, so
$$ \operatorname{Var}(\hat{\Psi}(x))=\frac{\sigma^2}{(Th)^2}\sum_{t=1}^T Z_t^2=\frac{\sigma^2}{Th}\times\mathsf{E}\left[\frac{1}{h}K^2\!\left(\frac{u_T-x}{h}\right)\right], $$
where $u_T= \lceil Tu\rceil/T$ with $u\sim U[0,1]$, so that $u_T$ is uniformly distributed on the design points $\{1/T,\ldots,1\}$.

Further analysis depends on the properties of the kernel function. For example, when $K$ is the Gaussian kernel and $x\in (0,1)$,
$$ \mathsf{E}\left[\frac{1}{h}K^2\!\left(\frac{u_T-x}{h}\right)\right]\to \frac{1}{2\sqrt{\pi}} $$
as $h\to 0$ and $Th\to\infty$, so $\operatorname{Var}(\hat{\Psi}(x))=\frac{\sigma^2}{Th}\big(\frac{1}{2\sqrt{\pi}}+o(1)\big)$ and Hansen's $C/(Th)$ rate is preserved in the fixed-design case.
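For what it's worth, here is a quick numerical check of this limit (a minimal Python sketch; the Gaussian kernel, the point $x=0.5$, and the bandwidth $h=T^{-1/5}$ are illustrative choices). Because the design is deterministic, the variance formula above is exact and no simulation is needed:

```python
import numpy as np

# Numerical check of the exact variance formula
#   Var(Psi_hat(x)) = sigma^2/(Th)^2 * sum_t K^2((t/T - x)/h)
# and of the limit Th * Var(Psi_hat(x)) / sigma^2 -> 1/(2*sqrt(pi))
# for the Gaussian kernel. h = T^(-1/5) is an illustrative choice.

def gaussian_kernel(v):
    return np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)

def scaled_variance(T, x, h, sigma=1.0):
    """Return Th * Var(Psi_hat(x)) from the exact fixed-design formula."""
    X = np.arange(1, T + 1) / T
    Z = gaussian_kernel((X - x) / h)
    var = sigma**2 * np.sum(Z**2) / (T * h) ** 2
    return T * h * var

limit = 1 / (2 * np.sqrt(np.pi))         # approx. 0.2821
for T in [10**3, 10**4, 10**5, 10**6]:
    h = T ** (-1 / 5)
    print(T, scaled_variance(T, x=0.5, h=h), limit)
```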