$W_n = \frac{1}{n}\sum\log(X_i) - \log(X_{(1)})$ with Delta method


Note: $\log = \ln$.

Suppose $X_1, \dots, X_n \sim \text{Pareto}(\alpha, \beta)$ are independent, with $n > \dfrac{2}{\beta}$. The Pareto$(\alpha, \beta)$ pdf is $$f(x) = \beta\alpha^{\beta}x^{-(\beta +1)}I(x > \alpha)\text{, } \alpha, \beta > 0\text{.}$$ Define $W_n = \dfrac{1}{n}\sum\log(X_i) - \log(X_{(1)})$, with $X_{(1)}$ being the first order statistic.
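(Not part of the original question.) As a quick sanity check that $W_n \to \beta^{-1}$, $W_n$ is easy to simulate by inverse-CDF sampling, since $P(X > x) = (\alpha/x)^\beta$ gives $X = \alpha U^{-1/\beta}$ for $U \sim \text{Unif}(0,1)$. The function names below are my own:

```python
import numpy as np

def sample_pareto(alpha, beta, n, rng):
    """Inverse-CDF sampling: P(X > x) = (alpha/x)**beta for x > alpha."""
    u = rng.uniform(size=n)
    return alpha * u ** (-1.0 / beta)

def w_n(x):
    """W_n = mean(log X_i) - log(min X_i)."""
    logs = np.log(x)
    return logs.mean() - logs.min()

rng = np.random.default_rng(0)
x = sample_pareto(alpha=2.0, beta=3.0, n=100_000, rng=rng)
print(w_n(x))  # close to 1/beta = 0.333...
```

With $\beta = 3$ the printed value should sit very near $1/3$, consistent with item 1 below.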

I wish to show $$\sqrt{n}(W_{n}^{-1}-\beta)\overset{d}{\to}\mathcal{N}(0, v^2)$$ as $n \to \infty$ (convergence in distribution) for some $v^2$.

Here's what I've already shown:

  1. $W_n \overset{p}{\to}\beta^{-1}$.
  2. $X_{(1)} \overset{p}{\to} \alpha$.
  3. The expected values and variances of $\log(X_i)$ for each $i$ and the same for $X_{(1)}$.

It seems clear that I need to use the Delta method here, but this would require first showing that $$\sqrt{n}(W_n - \beta^{-1}) \overset{d}{\to}\mathcal{N}(0, \text{something})\text{.}$$ I suppose I could approach this using the Central Limit Theorem, but it isn't clear to me how. By the CLT, $$\sqrt{n}\left[\dfrac{1}{n}\sum\log(X_i) - \underbrace{\left(\beta^{-1}+\log(\alpha) \right)}_{\mathbb{E}[\log(X_i)]} \right]\overset{d}{\to}\mathcal{N}\left(0, \underbrace{\beta^{-2}}_{\text{Var}(\log(X_i))}\right),$$ and by the continuous mapping theorem $$\log(X_{(1)}) \overset{p}{\to} \log(\alpha),$$ but I'm stuck as to how to combine these two facts.
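(My own addendum, not part of the question.) A simulation suggests the CLT step does survive subtracting $\log(X_{(1)})$ instead of $\log\alpha$: using the fact that $\log X_i = \log\alpha + E_i$ with $E_i \sim \text{Exp}(\text{rate}=\beta)$, the standardized $W_n$ empirically matches $\mathcal{N}(0, \beta^{-2})$. Parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n, reps = 2.0, 3.0, 2000, 4000

# log X_i = log(alpha) + E_i with E_i ~ Exp(rate=beta),
# since X = alpha * U**(-1/beta)  =>  log X = log(alpha) + (1/beta) * (-log U).
logs = np.log(alpha) + rng.exponential(scale=1.0 / beta, size=(reps, n))
w = logs.mean(axis=1) - logs.min(axis=1)   # W_n for each replication
z = np.sqrt(n) * (w - 1.0 / beta)
print(z.mean(), z.var())  # mean near 0, variance near 1/beta**2 = 1/9
```

With $\beta = 3$, the empirical variance of `z` lands near $\beta^{-2} = 1/9$, matching the CLT limit above.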


There are 2 answers below.

Best answer:

You actually need a slightly stronger result, namely $\sqrt{n}(\log(X_{(1)}) - \log \alpha) \xrightarrow{P} 0$; I am unsure if this actually holds, but you should be able to deduce this from your existing results.

If this is correct, then from the CLT you can apply Slutsky's theorem to deduce $$\sqrt{n}(W_n - \beta^{-1}) = \sqrt{n}\frac{1}{n} \sum \limits_{i = 1}^n \{\log(X_i) - \mathbb{E}[\log X_i]\} + \sqrt{n}(\log \alpha - \log(X_{(1)}))\xrightarrow d \mathcal{N}(0, \beta^{-2}).$$

Applying the Delta-method to the function $g(x) = \frac{1}{x}$ then yields $$\sqrt{n}(g(W_n) - g(\beta^{-1})) \xrightarrow d \mathcal{N}(0, g'(\beta^{-1})^2 \beta^{-2}) = \mathcal{N}(0, \beta^2).$$
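(A numerical check of this conclusion, not part of the answer; parameter values are arbitrary.) By the delta method result above, $\sqrt{n}(W_n^{-1} - \beta)$ should have variance near $\beta^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n, reps = 2.0, 3.0, 2000, 4000

# log X_i = log(alpha) + Exp(rate=beta), so simulate on the log scale.
logs = np.log(alpha) + rng.exponential(scale=1.0 / beta, size=(reps, n))
w = logs.mean(axis=1) - logs.min(axis=1)   # W_n for each replication
z = np.sqrt(n) * (1.0 / w - beta)          # should be approx N(0, beta^2)
print(z.mean(), z.var())                   # mean near 0, variance near 9
```

For $\beta = 3$ the empirical variance is close to $\beta^2 = 9$, in line with $\mathcal{N}(0, \beta^2)$.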

Edit: As zhoraster pointed out, the convergence at the beginning of my post does hold. For a simple proof, note that from the formulae in the comments of the question we get $E[(X_{(1)} - \alpha)^2] = O(n^{-2})$, which is enough to show $\sqrt{n}(X_{(1)} - \alpha) \xrightarrow d 0$. The delta method then yields $\sqrt{n}(\log X_{(1)} - \log \alpha) \xrightarrow d 0$; since this limit is constant, the convergence also holds in probability.
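To make the $O(n^{-2})$ bound explicit: since $P(X_{(1)} > x) = [P(X_1 > x)]^n = (\alpha/x)^{n\beta}$, we have $X_{(1)} \sim \text{Pareto}(\alpha, n\beta)$, so for $n\beta > 2$ (which is exactly the assumption $n > 2/\beta$), $\mathbb{E}[X_{(1)}] = \frac{n\beta\alpha}{n\beta - 1}$ and $\mathbb{E}[X_{(1)}^2] = \frac{n\beta\alpha^2}{n\beta - 2}$, whence $$\mathbb{E}[(X_{(1)}-\alpha)^2] = \alpha^2\left[\frac{n\beta}{n\beta-2} - \frac{2n\beta}{n\beta-1} + 1\right] = \frac{2\alpha^2}{(n\beta-1)(n\beta-2)} = O(n^{-2}).$$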

Second answer:

Proving the convergence $\sqrt{n}(X_{(1)}-\alpha)\overset{P}{\rightarrow} 0$ (the version with logarithms then follows, since $\log$ is differentiable at $\alpha$): $$ P\big(n|X_{(1;n)}-\alpha|>C\big) = P\big(X_{(1;n)}>\alpha+ C n^{-1}\big) = \alpha^{\beta n} (\alpha+C n^{-1})^{-\beta n} \\ = (1+C\alpha^{-1}n^{-1})^{- \beta n}\to e^{-\alpha^{-1}\beta C},\ n\to\infty. $$ It follows that $\{n(X_{(1;n)}-\alpha),n\ge 1\}$ is stochastically bounded, hence $\sqrt{n}(X_{(1)}-\alpha) = n^{-1/2}\cdot n(X_{(1)}-\alpha)\overset{P}{\rightarrow} 0$.
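(Not from the thread.) A quick numerical check of the limit above: the exact tail probability $(1+C\alpha^{-1}n^{-1})^{-\beta n}$ approaches $e^{-\alpha^{-1}\beta C}$ as $n$ grows. The values of $\alpha$, $\beta$, $C$ are chosen arbitrarily:

```python
import math

# P(n|X_(1;n) - alpha| > C) = (1 + C/(alpha*n))**(-beta*n),
# which converges to exp(-beta*C/alpha) as n -> infinity.
alpha, beta, C = 2.0, 3.0, 1.0
limit = math.exp(-beta * C / alpha)
for n in (10, 100, 10_000):
    p = (1.0 + C / (alpha * n)) ** (-beta * n)
    print(n, p, limit)
```

Already at moderate $n$ the probability is close to the limit $e^{-3/2} \approx 0.223$, illustrating the stochastic boundedness of $n(X_{(1;n)}-\alpha)$.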