I've understood everything in the picture (source) below except for this equality:
$$\hat{\sigma} = \frac{\hat{\alpha} - 1}{\sqrt n}$$
Can someone please explain where it comes from?
What the source calls a "power law" is more commonly known as a Pareto distribution. The PDF is $$f_X(x) = \frac{\alpha - 1}{x_0} \left(\frac{x}{x_0}\right)^{-\alpha} \mathbb 1(x > x_0),$$ where I have written $x_0 = x_{\text{min}}$ for convenience.

The conditional MLE given $x_0$, $$\hat \alpha = 1 + \left(\frac{1}{n} \sum_{i=1}^n \log \frac{x_i}{x_0} \right)^{-1},$$ suggests considering the distribution of the log-transformed variable $Y = \log \frac{X}{x_0}$ (conditional on $x_0$). Its PDF is $$\begin{align*} f_Y(y) &= f_X(x_0 e^y) \left|\frac{d}{dy}\left[x_0 e^y\right]\right| \\ &= \frac{\alpha - 1}{x_0} \left( \frac{x_0 e^y}{x_0}\right)^{-\alpha} x_0 e^y \, \mathbb 1(x_0 e^y > x_0) \\ &= (\alpha - 1) e^{-(\alpha - 1) y} \, \mathbb 1 (y > 0), \end{align*}$$ which means $$Y \sim \operatorname{Exponential}(\lambda = \alpha - 1);$$ that is, $Y$ is exponentially distributed with rate parameter $\alpha - 1$.

As a sum of $n$ IID exponential random variables, $$\sum_{i=1}^n \log \frac{x_i}{x_0} \sim \operatorname{Gamma}(n, \alpha - 1),$$ and hence $$\frac{1}{n} \sum_{i=1}^n \log \frac{x_i}{x_0} \sim \operatorname{Gamma}(n, n(\alpha-1)),$$ where the parametrization is by shape and rate. Consequently, $$W = \hat \alpha - 1 \sim \operatorname{InverseGamma}(n, n(\alpha-1)),$$ with PDF $$f_W(w) = \frac{(n(\alpha-1))^n e^{-n(\alpha-1)/w}}{w^{n+1} \Gamma(n)} \mathbb 1(w > 0).$$ This has mean $$\operatorname{E}[W] = \frac{n}{n-1}(\alpha - 1)$$ and variance $$\operatorname{Var}[W] = \frac{n^2}{(n-1)^2(n-2)} (\alpha - 1)^2.$$

This gives an exact standard deviation of the MLE in terms of the parameter $\alpha$, which is consistent with the asymptotic estimated standard deviation, since $$\sqrt{\widehat{\operatorname{Var}}[\hat \alpha]} \approx \frac{\hat \alpha - 1}{\sqrt{n-2}} \approx \frac{\hat \alpha - 1}{\sqrt{n}}$$ for sufficiently large $n$; but note that the true standard deviation of the MLE is strictly larger.
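A quick Monte Carlo sketch can check the exact moments of $W = \hat\alpha - 1$ derived above (the parameter values $\alpha = 2.5$, $x_0 = 1$, $n = 50$ and the replicate count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, x0, n = 2.5, 1.0, 50   # arbitrary illustrative values
reps = 200_000

# Draw Pareto samples by inverse transform: the survival function is
# S(x) = (x/x0)^{-(alpha-1)}, so X = x0 * U^{-1/(alpha-1)} for U ~ Uniform(0,1)
u = rng.random((reps, n))
x = x0 * u ** (-1.0 / (alpha - 1.0))

# Conditional MLE for each replicate
alpha_hat = 1.0 + 1.0 / np.log(x / x0).mean(axis=1)
w = alpha_hat - 1.0

# Exact InverseGamma(n, n(alpha-1)) moments from the derivation
mean_exact = n / (n - 1) * (alpha - 1)
var_exact = n**2 / ((n - 1) ** 2 * (n - 2)) * (alpha - 1) ** 2

print(w.mean(), mean_exact)   # should agree closely
print(w.var(), var_exact)     # should agree closely
```

With these settings the empirical mean and variance of $W$ land within Monte Carlo error of the exact values, including the finite-sample bias factor $n/(n-1)$.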
Following the comment posted above, calculating the Fisher information is straightforward. Writing $\ell(\alpha \mid \boldsymbol x, x_0)$ for the log-likelihood, $$-\frac{\partial^2}{\partial \alpha^2}\,\ell(\alpha \mid \boldsymbol x, x_0) = \frac{n}{(\alpha - 1)^2},$$ hence the asymptotic standard deviation of the MLE is $$\hat \sigma = \frac{1}{\sqrt{\mathcal I(\hat \alpha)}} = \frac{\hat \alpha - 1}{\sqrt{n}}.$$
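The second derivative is easy to verify symbolically; here is a sketch using SymPy, with $T = \sum_i \log(x_i/x_0)$ treated as a constant in $\alpha$:

```python
import sympy as sp

alpha, n, T, x0 = sp.symbols('alpha n T x0', positive=True)

# Log-likelihood of n Pareto observations given x0:
# log f(x) = log(alpha-1) - log(x0) - alpha*log(x/x0), summed over the sample
ell = n * sp.log(alpha - 1) - n * sp.log(x0) - alpha * T

# Observed information: negative second derivative in alpha
info = -sp.diff(ell, alpha, 2)
print(sp.simplify(info))   # n/(alpha - 1)**2
```

Evaluating at $\hat\alpha$ and inverting the square root recovers $\hat\sigma = (\hat\alpha - 1)/\sqrt n$ directly.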
Each observation follows a Pareto distribution with shape parameter $\alpha-1$, so the single-observation likelihood has the form
$$L(\alpha\mid x)=\frac{(\alpha-1)k^{\alpha-1}}{x^\alpha}\,\mathbf 1_{x>k>0},\qquad \alpha>1$$
(I am denoting $x_{\min}$ by $k$ here).
The MLE of $\alpha$ based on the sample $(X_1,\ldots,X_n)$ is $$\hat\alpha=1+\frac{n}{T}\,,$$
where $$T=\sum_{i=1}^n \ln\left(\frac{X_i}{k}\right).$$
Note that $\ln (X_i/k)$ is exponential with mean $1/(\alpha-1)$, so that
$$E(T)=\frac n{\alpha-1}$$
and $$\operatorname{Var}(T)=\frac n{(\alpha-1)^2}$$
Now the variance of $\hat\alpha$ is
$$\operatorname{Var}(\hat\alpha)=n^2\operatorname{Var}\left(\frac1T\right)$$
By a first-order Taylor expansion (the delta method), we have for large $n$,
$$\operatorname{Var}\left(\frac1T\right)\approx \frac{\operatorname{Var}(T)}{(E(T))^4}=\frac{(\alpha-1)^2}{n^3}$$
Hence,
$$\operatorname{Var}(\hat\alpha)\approx \frac{(\alpha-1)^2}{n}$$
So a large sample estimate of the standard error is $$\widehat{\text{S.E.}(\hat\alpha)}=\frac{\hat\alpha-1}{\sqrt n}$$
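A short simulation can confirm the delta-method approximation. Since $\ln(X_i/k)$ is exponential with rate $\alpha-1$, we can simulate $T$ directly as a sum of exponentials (the values $\alpha = 3$, $n = 500$, and the replicate count below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, reps = 3.0, 500, 20_000   # arbitrary illustrative values

# T = sum of n iid Exponential(rate = alpha-1) variables, per replicate
T = rng.exponential(scale=1.0 / (alpha - 1.0), size=(reps, n)).sum(axis=1)
alpha_hat = 1.0 + n / T

print(alpha_hat.std())              # empirical SD of the MLE
print((alpha - 1.0) / np.sqrt(n))   # delta-method approximation
```

The two printed values agree to within Monte Carlo error, as expected from $\operatorname{Var}(\hat\alpha)\approx(\alpha-1)^2/n$.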