Convergence of Remainder from Taylor Expansion


For a distribution function $F$ and its variance functional $T(F)$, it can be shown that the Taylor expansion of $T(F)$ at $F$ in the direction of the empirical distribution function $F_n$ gives the remainder $$R_{1n} = -(\bar X - \mu)^2,$$ where $n$ is the sample size and the subscript $1$ indicates that the expansion is carried out only to the linear term. By the law of the iterated logarithm, $$|R_{1n}| = O(n^{-1}\log\log n)\ a.s.$$ It is then claimed that $$\sqrt{n} R_{1n} \to 0$$ in probability. Why is this the case? Thank you!

Best Answer

$$\sqrt{n} R_{1n} = -\sqrt{n}(\bar X - \mu)^2 = -[\sqrt{n}(\bar X - \mu)]\cdot (\bar X - \mu)$$

$$[\sqrt{n}(\bar X - \mu)] \xrightarrow{d} N(0,\sigma^2), \;\;(\bar X - \mu)\xrightarrow{p}0$$

Then, apply Slutsky's Theorem (which does not require the two factors to be independent, or even distinct, random variables): the product of an $O_p(1)$ sequence and an $o_p(1)$ sequence converges to $0$ in probability.
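The argument above can be checked numerically. Below is a small Monte Carlo sketch (my own illustration, not from the answer) that draws i.i.d. standard normal samples, so $\mu = 0$ and $\sigma^2 = 1$, and computes $\sqrt{n}\,|R_{1n}| = \sqrt{n}(\bar X - \mu)^2$ for increasing $n$; the values shrink roughly like $n^{-1/2}$, consistent with convergence in probability to $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.0  # true mean of the standard normal samples

# sqrt(n) * (Xbar - mu)^2 = [sqrt(n)(Xbar - mu)] * (Xbar - mu):
# the first factor is O_p(1) by the CLT, the second is o_p(1) by the LLN,
# so the product tends to 0 in probability as n grows.
for n in [10**2, 10**4, 10**6]:
    x = rng.normal(mu, 1.0, size=n)
    xbar = x.mean()
    val = np.sqrt(n) * (xbar - mu) ** 2
    print(n, val)
```

Since $\sqrt{n}(\bar X - \mu)^2 \approx \sigma^2 Z^2/\sqrt{n}$ for a standard normal $Z$, the printed values should be on the order of $10^{-1}$, $10^{-2}$, $10^{-3}$ respectively, up to random fluctuation.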