How to prove the Delta Method?


In Exercise 9.14 of "Measure Theory and Probability Theory" by Krishna B. Athreya and Soumendra N. Lahiri, I'm trying to prove the delta method.

We have a sequence of random variables $X_n$, a divergent sequence of real numbers $a_n \rightarrow +\infty$, and a function $H$ that is differentiable at $\theta$ with $H'(\theta)=c$, all such that:

\begin{gather} a_n \left( X_n - \theta\right) \longrightarrow^d Z \ \ \ \text{(converge in distribution)} \end{gather}

I want to show that

\begin{gather} a_n \left( H(X_n) - H(\theta) \right) \longrightarrow^d cZ \ \ \ \end{gather}

The book suggests using the Taylor expansion \begin{gather} H(X)-H(\theta) = c(X - \theta) + R(X)(X - \theta) \end{gather}

where $R(x)\rightarrow 0$ as $x \rightarrow \theta$. Now, considering the claim

\begin{gather} a_n \left( c(X_n - \theta) + R(X_n)(X_n - \theta) \right) \longrightarrow^d cZ \ \ \ \end{gather}

it's clear that $a_n c(X_n - \theta) \longrightarrow^d cZ$ by Slutsky's theorem, but how can I show that the remainder term tends to $0$? The book seems to suggest that $R(X_n)$ is stochastically bounded, which would be sufficient, but how can I prove it?
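A sketch of how the remainder argument can go (assuming, as is standard, that $R(\theta)$ is defined to be $0$, so that differentiability of $H$ at $\theta$ makes $R$ continuous at $\theta$): since $a_n \rightarrow +\infty$,

\begin{gather} X_n - \theta = \frac{1}{a_n}\, a_n\left( X_n - \theta\right) \longrightarrow^d 0 \cdot Z = 0, \end{gather}

so $X_n \longrightarrow^P \theta$ (convergence in distribution to a constant implies convergence in probability). Since $R$ is continuous at $\theta$, the continuous mapping theorem gives $R(X_n) \longrightarrow^P 0$, and then by Slutsky's theorem

\begin{gather} a_n R(X_n)\left(X_n - \theta\right) = R(X_n)\cdot a_n\left(X_n - \theta\right) \longrightarrow^d 0 \cdot Z = 0. \end{gather}

So in fact $R(X_n)$ is not just stochastically bounded but converges to $0$ in probability, and one more application of Slutsky's theorem combines the two terms into the claim.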

1 Answer

Here (in Italian, as you are) you can find the complete proof, but I want to suggest another approach:

Using the Cramér–Rao inequality we have, in general, the following lower bound on the variance of an estimator $T$ based on $n$ i.i.d. observations:

$$\mathbb{V}[T]\geq \frac{\Big[\frac{\partial}{\partial\theta}\mathbb{E}_{\theta}[T]\Big]^2}{nI(\theta)}$$

where $I(\theta)$ is the Fisher information.
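As a concrete sanity check (my own example, not from the original answer): for i.i.d. $X_1,\dots,X_n \sim \mathcal{N}(\theta,\sigma^2)$ with $\sigma^2$ known,

$$I(\theta) = \mathbb{E}_{\theta}\left[\Big(\frac{\partial}{\partial\theta}\log f(X;\theta)\Big)^2\right] = \mathbb{E}_{\theta}\left[\Big(\frac{X-\theta}{\sigma^{2}}\Big)^2\right] = \frac{1}{\sigma^{2}},$$

so the Cramér–Rao bound for an unbiased estimator of $\theta$ is $\sigma^2/n$, which the sample mean $\bar{X}_n$ attains exactly.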

If we restrict $T$ to the class of unbiased estimators for

  1. $\theta$, we have

$$\mathbb{V}[T]\geq \frac{1}{nI(\theta)}$$

  2. $g(\theta)$, we have

$$\mathbb{V}[T]\geq \frac{\Big[g'(\theta)\Big]^2}{nI(\theta)}=\mathbb{V}[\hat{\theta}]\times \Big[g'(\theta)\Big]^2$$

where $\hat{\theta}$ is an efficient unbiased estimator of $\theta$, i.e. one attaining the bound in 1. Note the parallel with the delta method: the variance bound for estimating $g(\theta)$ is the bound for $\theta$ scaled by $\big[g'(\theta)\big]^2$, just as the asymptotic variance of $H(X_n)$ is that of $X_n$ scaled by $c^2=\big[H'(\theta)\big]^2$.
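To see this variance scaling numerically, here is a quick sketch; the sample size, the $\mathrm{Exp}(1)$ data, and the map $H(x)=x^2$ are my own illustrative choices, not from the exercise:

```python
import numpy as np

# Delta-method check (illustrative choices):
# X_bar = mean of n Exp(1) draws, theta = 1, and sqrt(n)*(X_bar - theta) -> N(0, 1)
# by the CLT (Exp(1) has variance 1). With H(x) = x**2 we have H'(theta) = 2,
# so the delta method predicts sqrt(n)*(H(X_bar) - H(theta)) -> N(0, 4),
# i.e. a limiting standard deviation of 2.
rng = np.random.default_rng(0)
n, reps = 2_000, 5_000

x_bar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
transformed = np.sqrt(n) * (x_bar**2 - 1.0)

sd = transformed.std()
print(f"empirical sd = {sd:.3f}  (delta method predicts 2)")
```

The empirical standard deviation lands close to $|H'(\theta)| = 2$, matching the $\big[g'(\theta)\big]^2$ scaling of the variance above.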