Posterior for Pareto distribution with Jeffreys prior


For some known parameter m, the data is IID pareto distribution: $X_1,..,X_n \sim \text{Pareto}(\theta, m)$

$f(x | \theta) = \theta m ^\theta x^{-(\theta + 1)} \textbf{1}{\{m < x \}}$

I need to find a posterior for the Jeffreys prior: $$\pi(\theta) \propto I(\theta)^{1/2}$$

I calculated the prior as follows:

Fisher information: $$I(\theta) = -E\bigg[\frac{\partial^2\log f(X|\theta)}{\partial\theta^2} \bigg]$$

$$\log f(X|\theta) = \log \theta + \theta \log m - (\theta + 1)\log x$$

$$ I(\theta) = -\frac{1}{\theta^2}$$

Now, in order to obtain the posterior, I'm trying to use the fact that the conjugate prior for Pareto is Gamma, and the Jeffreys prior is the limiting case of the conjugate one, but I can't arrive at an appropriate limit. Does this strategy make any sense? Simply multiplying the Likelihood with the obtained Jeffreys prior doesn't seem to work. Any hints highly aprreciated!
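For reference, the Fisher information calculation can be verified symbolically (a minimal sketch using sympy; the symbol names are my own):

```python
# Symbolic check of the Fisher information for Pareto(theta, m).
import sympy as sp

theta, m, x = sp.symbols('theta m x', positive=True)

# log-density: log f(x | theta) = log(theta) + theta*log(m) - (theta+1)*log(x)
log_f = sp.log(theta) + theta * sp.log(m) - (theta + 1) * sp.log(x)

# Second derivative with respect to theta
d2 = sp.diff(log_f, theta, 2)   # equals -1/theta**2, free of x

# I(theta) = -E[d2] = 1/theta**2; positive, as Fisher information must be
fisher = -d2
print(sp.simplify(fisher))
```

Note that the second derivative does not depend on $x$, so no expectation is actually needed, and the result is $I(\theta) = 1/\theta^2 > 0$.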


Best answer:

It is not necessary to compute the prior's normalizing constant, so you can simply take

$$\pi(\theta)\propto \frac{1}{\theta}$$

(the minus sign in your result is an obvious error, given that your Fisher information is $-E\{\dots\}$ and must be positive).

Now let's focus on the likelihood (any term not depending on $\theta$ can be dropped):

$$p(\mathbf{x}|\theta)\propto \theta^n\cdot\left( \frac{\Pi_ix_i}{m^n}\right)^{-\theta}=\theta^n\cdot\exp\{-\theta[\Sigma_i\log x_i-n\log m]\}$$

thus the posterior is

$$\pi(\theta|\mathbf{x})\propto \theta^{n-1}\cdot\exp\{-\theta[\Sigma_i\log x_i-n\log m]\}$$

which is the kernel of a Gamma distribution:

$$\theta\mid\mathbf{x}\sim\text{Gamma}\left(n,\ \Sigma_i\log x_i-n\log m\right)$$

with shape $n$ and rate $\Sigma_i\log x_i-n\log m$.
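A quick numerical sanity check of this posterior (a sketch, assuming numpy and scipy are available; the variable names are my own):

```python
# Simulate Pareto(theta, m) data and form the Jeffreys posterior
# Gamma(n, sum(log x_i) - n*log m) derived above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta_true, m, n = 2.5, 1.0, 500

# Pareto(theta, m) samples via inverse CDF: x = m * U**(-1/theta)
x = m * rng.uniform(size=n) ** (-1.0 / theta_true)

shape = n                                  # Gamma shape a = n
rate = np.sum(np.log(x)) - n * np.log(m)   # Gamma rate b = sum(log x_i) - n*log m
posterior = stats.gamma(a=shape, scale=1.0 / rate)

# Posterior mean a/b should sit close to theta_true for large n
print(posterior.mean(), posterior.std())
```

With $n = 500$ observations the posterior mean $n/b$ lands near the true $\theta$, and the posterior standard deviation $\sqrt{n}/b$ shrinks at the usual $1/\sqrt{n}$ rate.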

Another answer:

Note that your calculation has two issues: first, the sign is incorrect; we must have $I(\theta) > 0$. Second, you calculated the information for a single observation. The Fisher information for a sample of $n$ observations is $$I(\theta) = \frac{n}{\theta^2},$$ although the resulting Jeffreys prior is still $\pi(\theta) \propto 1/\theta$, since the constant factor $\sqrt{n}$ drops out.

With this in mind, it is not difficult to compute the posterior explicitly:

$$\begin{align} f(\theta \mid x_1, \ldots, x_n, m) &\propto f(x_1, \ldots, x_n \mid m, \theta) \pi(\theta) \\ &= \theta^n m^{n\theta} \left(\prod_{i=1}^n x_i\right)^{-(\theta+1)} \frac{\sqrt{n}}{\theta} \\ &\propto \theta^{n-1} \left(\frac{1}{m^n}\prod_{i=1}^n x_i\right)^{-\theta}. \end{align}$$

This is the kernel of a gamma distribution, which we can recognize more easily if we let $$a = n, \quad b = \log \frac{1}{m^n} \prod_{i=1}^n x_i.$$ Then $a$ is the shape and $b$ is the rate.
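The shape/rate identification above can be checked against the classical MLE (a sketch with assumed variable names; note that $b$ is computed via a sum of logs rather than $\log \prod_i x_i$, which would overflow for large $n$):

```python
# Compare the Gamma(a=n, b=log(prod x_i / m^n)) posterior summaries to the MLE.
import numpy as np

rng = np.random.default_rng(1)
theta_true, m, n = 3.0, 2.0, 1000

# Pareto(theta, m) samples via inverse CDF
x = m * rng.uniform(size=n) ** (-1.0 / theta_true)

# Rate b = log( prod x_i / m^n ), computed stably as a sum of logs
b = np.sum(np.log(x)) - n * np.log(m)

mle = n / b               # classical MLE of theta for known m
post_mean = n / b         # Gamma(n, b) mean a/b: coincides with the MLE
post_mode = (n - 1) / b   # Gamma(n, b) mode (a-1)/b

print(mle, post_mean, post_mode)
```

Under the Jeffreys prior the posterior mean $a/b = n/b$ coincides exactly with the MLE, which is one way to see that this prior is "non-informative" here.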