Prove a convergence condition for an estimating function


Suppose that $\mathbf{U}(\boldsymbol{\beta})=(U_1(\boldsymbol{\beta}),\cdots, U_d(\boldsymbol{\beta}))^T$ is a semiparametric estimating function for $\boldsymbol{\beta} = (\beta_1, \cdots, \beta_d)^T$ based on a random sample of size $n$, i.e., $\mathbf{U}(\boldsymbol{\beta})$ is the sum of $n$ i.i.d. influence functions $\phi(D_i;\boldsymbol{\beta})$, $i=1,\cdots,n$, each with zero mean and finite variance at the truth. Assume that $\mathbf{U}(\boldsymbol{\beta})$ is continuous in $\boldsymbol{\beta}$, and let $\boldsymbol{\beta}_0$ denote the true value of $\boldsymbol{\beta}$. How can I verify the following condition for $\mathbf{U}(\boldsymbol{\beta})$?

C.1. There exists a nonsingular matrix $\mathbf{A}$ such that for any given constant $M$, $$ \sup_{\left\|\boldsymbol{\beta}-\boldsymbol{\beta}_0\right\| \leq M n^{-1/2}} \left\| n^{-1/2} \mathbf{U}(\boldsymbol{\beta}) - n^{-1/2} \mathbf{U}\left(\boldsymbol{\beta}_0\right) - n^{1/2} \mathbf{A}\left(\boldsymbol{\beta}-\boldsymbol{\beta}_0\right) \right\| = o_p(1). $$ I suppose this matrix $\mathbf{A}$ is just (the probability limit of) $n^{-1}$ times the derivative of $\mathbf{U}(\boldsymbol{\beta})$ at $\boldsymbol{\beta}_0$, i.e., $\mathbf{A} = E\{\partial \phi(D;\boldsymbol{\beta}_0)/\partial \boldsymbol{\beta}^T\}$, but I can't prove it rigorously.
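Here is the heuristic Taylor-expansion sketch behind my guess (assuming $\phi$ is differentiable in $\boldsymbol{\beta}$ near $\boldsymbol{\beta}_0$; this is not a rigorous proof):

```latex
\begin{aligned}
n^{-1/2}\mathbf{U}(\boldsymbol{\beta}) - n^{-1/2}\mathbf{U}(\boldsymbol{\beta}_0)
  &= n^{-1/2}\sum_{i=1}^n \bigl\{\phi(D_i;\boldsymbol{\beta}) - \phi(D_i;\boldsymbol{\beta}_0)\bigr\} \\
  &\approx n^{-1/2}\sum_{i=1}^n
     \frac{\partial \phi(D_i;\boldsymbol{\beta}_0)}{\partial \boldsymbol{\beta}^{T}}
     \,(\boldsymbol{\beta}-\boldsymbol{\beta}_0) \\
  &= n^{1/2}\Bigl\{\frac{1}{n}\sum_{i=1}^n
     \frac{\partial \phi(D_i;\boldsymbol{\beta}_0)}{\partial \boldsymbol{\beta}^{T}}\Bigr\}
     (\boldsymbol{\beta}-\boldsymbol{\beta}_0).
\end{aligned}
```

By the law of large numbers the bracketed average converges to $\mathbf{A} = E\{\partial \phi(D;\boldsymbol{\beta}_0)/\partial \boldsymbol{\beta}^T\}$, and since $\|\boldsymbol{\beta}-\boldsymbol{\beta}_0\| \leq M n^{-1/2}$ the $n^{1/2}$ factor is offset, so the remainder should be small. What I cannot do is make the "$\approx$" rigorous *uniformly* over the $n^{-1/2}$-ball, which is exactly what C.1 asserts (I gather this is usually done via stochastic-equicontinuity or Donsker-class arguments).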
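As a numerical sanity check (a toy sketch only, not part of the question: it assumes the linear-model score $\mathbf{U}(\boldsymbol{\beta})=\sum_i X_i(Y_i - X_i^T\boldsymbol{\beta})$ with standard normal covariates, so $\mathbf{A} = -E[XX^T] = -\mathbf{I}$), one can Monte-Carlo the supremum in C.1 over the $n^{-1/2}$-ball and watch it shrink with $n$:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(X, Y, beta):
    # Linear-model estimating function: U(beta) = sum_i X_i (Y_i - X_i^T beta)
    return X.T @ (Y - X @ beta)

def sup_discrepancy(n, M=2.0, n_draws=200):
    """Monte-Carlo approximation of the supremum in condition C.1
    over the ball |beta - beta0| <= M * n^{-1/2}."""
    d = 2
    beta0 = np.array([1.0, -0.5])     # true parameter
    X = rng.normal(size=(n, d))       # standard normal covariates
    Y = X @ beta0 + rng.normal(size=n)
    A = -np.eye(d)                    # A = -E[X X^T] = -I in this toy model
    U0 = score(X, Y, beta0)
    worst = 0.0
    for _ in range(n_draws):
        # draw beta uniformly from the ball of radius M * n^{-1/2}
        u = rng.normal(size=d)
        u *= rng.uniform() ** (1.0 / d) / np.linalg.norm(u)
        beta = beta0 + M * n ** -0.5 * u
        diff = (n ** -0.5 * (score(X, Y, beta) - U0)
                - n ** 0.5 * A @ (beta - beta0))
        worst = max(worst, np.linalg.norm(diff))
    return worst
```

Because this score is exactly linear in $\boldsymbol{\beta}$, the discrepancy reduces to $n^{1/2}\,(n^{-1}X^TX - \mathbf{I})(\boldsymbol{\beta}-\boldsymbol{\beta}_0)$, whose supremum over the ball is bounded by $M\|n^{-1}X^TX - \mathbf{I}\|$ and so shrinks at roughly the $n^{-1/2}$ rate; for a genuinely nonlinear score the same empirical check applies, but the rigorous argument needs the uniformity discussed above.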