Fundamental Transformation Method Law of Probabilities: power law


In the paper, Power-Law Distributions in Empirical Data, the authors describe the transformation method for generating random numbers distributed according to a power law (page 38, Appendix D). Their procedure applies equation D.3 to the complementary cumulative distribution function for the power law, defined in equation 2.6, to arrive at equation D.4. I cannot reproduce equation D.4 starting from equation 2.6 using equation D.3. All equations are shown below:

$$P(x) = \int_x^{\infty} p(x') dx' = (\frac{x}{x_{min}})^{-\alpha+1}, \tag{2.6}$$

$$x = P^{-1}(1-r),\tag{D.3} $$

$$ x = x_{min}(1-r)^{-1/(\alpha-1)},\tag{D.4} $$

Equation 2.6 is the normalized complementary cumulative distribution function for the continuous power law. The transformation method is described in the authors' bibliography as Ref. 47 (page 288). The reference describes $P(x)$ as the indefinite integral of $p(x)$ and $P^{-1}(x)$ as the inverse function of $P$.

When I try to reproduce equation D.4 using either the definite or the indefinite integral, I end up with a slightly different result. I think I have a misconception about the mechanics of the transformation method, or I am missing a final algebra step. My worked transformation is shown below; the last step is where I am stuck.

$$ P(x) = (\frac{x}{x_{min}})^{1-\alpha} $$

$$ P^{-1}(x) = \biggl[\Bigl(\frac{x}{x_{min}}\Bigr)^{1-\alpha}\biggr]^{-1}$$

$$ P^{-1}(x) = (\frac{x}{x_{min}})^{\alpha-1} $$

$$ P^{-1}(1-r) = (\frac{1-r}{x_{min}})^{\alpha-1}$$

$$P^{-1}(1-r) = \frac{(1-r)^{\alpha-1}}{x_{min}^{\alpha-1}}$$

$$P^{-1}(1-r) = \frac{x_{min}(1-r)^{\alpha-1}}{x_{min}^{\alpha}}$$

$$P^{-1}(1-r) = x_{min}^{1-\alpha}(1-r)^{\alpha-1}$$

$$ P^{-1}(1-r) = \biggl[x_{min}(1-r)^{\frac{\alpha-1}{1-\alpha}}\biggr]^{1-\alpha}$$

On BEST ANSWER

You seem to be confusing the inverse of a function $f(x)$ with its reciprocal $g(x) = 1/f(x)$, which are not the same thing. If $F_X(x)$ is the cumulative distribution function of a random variable $X$, then the transformation method for simulating random variables proceeds by drawing $U \sim \mbox{Unif}(0,1)$ and then finding the value $x^*$ such that $F_X(x^*) = U$. This particular value is given by the inverse of $F_{X}(x)$, typically denoted as the function $$ F_{X}^{-1} (u) : u \mapsto x(u) \; \mbox{ such that } \; F_X(x(u)) = u. $$
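To make the general recipe concrete, here is a minimal Python sketch (the function name is my own) applying inverse-transform sampling to the exponential distribution, where $F_X(x) = 1 - e^{-\lambda x}$ has the closed-form inverse $F_X^{-1}(u) = -\log(1-u)/\lambda$:

```python
import math
import random

def sample_exponential(rate, rng=random.random):
    """Inverse-transform sampling for Exp(rate).

    F(x) = 1 - exp(-rate * x), so F^{-1}(u) = -log(1 - u) / rate.
    random.random() returns u in [0, 1), so 1 - u is in (0, 1]
    and the logarithm is always defined.
    """
    u = rng()
    return -math.log(1.0 - u) / rate

random.seed(0)
samples = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should be close to 1/rate = 0.5
```

The sample mean should approach $1/\lambda$ as the number of draws grows, which is an easy sanity check that the inverse was computed correctly.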

In this instance we are working with the complementary cumulative distribution function $P(x) = 1-F_X(x)$, so we need to solve for $x^* = x(u)$ in $$ 1 - u = P(x^*). $$ Taking logarithms of both sides and solving, we find $$ \begin{align*} 1 - u = \left(\frac{x}{x_{min}}\right)^{1-\alpha} &\iff \log \left(\frac{x}{x_{min}}\right) = \frac{1}{1 - \alpha}\log(1-u) \\ &\iff x=x_{min}(1-u)^{\frac{1}{1 - \alpha}} = x_{min}(1-u)^{-1/(\alpha-1)}, \end{align*} $$ which is equation (D.4).
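Equation (D.4) can also be checked numerically. A minimal Python sketch (the function name and the sanity checks are my own, not from the paper): all samples should lie above $x_{min}$, and the empirical tail fraction should match $P(x > t) = (t/x_{min})^{-(\alpha-1)}$ from equation (2.6).

```python
import random

def sample_power_law(x_min, alpha, rng=random.random):
    """Draw from the continuous power law p(x) ~ x^(-alpha), x >= x_min,
    using equation (D.4): x = x_min * (1 - r)^(-1/(alpha - 1))."""
    r = rng()  # r in [0, 1), so 1 - r is in (0, 1]
    return x_min * (1.0 - r) ** (-1.0 / (alpha - 1.0))

random.seed(1)
xs = [sample_power_law(1.0, 2.5) for _ in range(200_000)]

# Empirical P(X > 2); by (2.6) this should be close to 2^(-(alpha-1)) = 2^(-1.5)
frac_above_2 = sum(x > 2.0 for x in xs) / len(xs)
```

Note that the mean converges slowly here because a power law with $\alpha = 2.5$ has infinite variance, so the tail fraction is a more reliable check than the sample mean.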