Analytical form of the Jeffreys prior


Derive, analytically, the form of the Jeffreys prior $p_J(\lambda)$ for the parameter $\lambda$ of a Poisson likelihood, where the observed data $y = (y_1, y_2, \dots, y_n)$ is a vector of i.i.d. draws from the likelihood. That is, each $y_i \sim \text{Poisson}(\lambda)$.

Also, recall that the Jeffreys prior is $p_J(\lambda) \propto \sqrt{I(\lambda)}$, which is defined in terms of the Fisher information:

$$I(\lambda) = -\mathbb{E}\left[\frac{\partial^2 \log p(y\mid\lambda)}{\partial\lambda^2}\right],$$

where $\mathbb{E}$ denotes expectation, and $p(y|\lambda)$ is the joint likelihood of $y$.

I'm having trouble understanding this question. Would someone be able to explain what's going on? From what I can understand, we have:

$$p_J(\lambda) \propto \sqrt{-\mathbb{E}\left[\frac{\partial^2 \log p(y\mid\lambda)}{\partial\lambda^2}\right]},$$

but I don't really know how to proceed further. Would really appreciate a detailed explanation - thank you in advance!


You can proceed by using the information given to you. You know that $y_i \sim \text{Poisson}(\lambda)$ for all $i=1,\dots,n$ and that the $y_i$ are independent, so you can derive the form of the log-likelihood $\log p(y\mid \lambda)$. By independence,
\begin{align}
\log p(y\mid \lambda) &= \log \prod_{i=1}^n p(y_i\mid \lambda) = \log \prod_{i=1}^n e^{-\lambda}\frac{\lambda^{y_i}}{y_i!} = \sum_{i=1}^n \bigl(-\lambda + y_i\log(\lambda) - \log(y_i!)\bigr)\\
&= -n\lambda + \log(\lambda)\sum_{i=1}^n y_i - \sum_{i=1}^n \log(y_i!).
\end{align}

To obtain the Jeffreys prior, take the second derivative with respect to $\lambda$ using the above:
$$
\frac{\partial^2}{\partial \lambda^2} \log p(y\mid \lambda) = \frac{\partial}{\partial \lambda}\left(-n + \frac{1}{\lambda}\sum_{i=1}^n y_i\right) = -\lambda^{-2}\sum_{i=1}^n y_i.
$$

Now take the expected value, using the fact that $\mathbb{E}[y_i] = \lambda$ when $y_i \sim \text{Poisson}(\lambda)$:
$$
\mathbb{E}\left[\frac{\partial^2}{\partial \lambda^2} \log p(y\mid \lambda) \right] = -\lambda^{-2}\, n\lambda = -\frac{n}{\lambda}.
$$

Under certain regularity conditions, which are met here, the Fisher information admits the form $\mathcal{I}(\lambda) = -\mathbb{E}\left[\frac{\partial^2}{\partial \lambda^2} \log p(y\mid \lambda) \right]$. Thus the Jeffreys prior in this case is given by
$$
p_J(\lambda) \propto \sqrt{\mathcal{I}(\lambda)} = \sqrt{\frac{n}{\lambda}} \propto \lambda^{-1/2}.
$$
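As a quick sanity check (a sketch using SymPy, not part of the original answer), the differentiation and expectation steps above can be reproduced symbolically. Substituting $\mathbb{E}[y_i] = \lambda$ for $y_i$ is valid here because the second derivative is linear in the data:

```python
import sympy as sp

lam = sp.symbols('lam', positive=True)   # the Poisson rate parameter
n = sp.symbols('n', positive=True)       # number of observations
y = sp.symbols('y', nonnegative=True)    # a single observation y_i

# log-likelihood contribution of one observation: -lam + y*log(lam) - log(y!)
loglik_one = -lam + y * sp.log(lam) - sp.log(sp.factorial(y))

# second derivative with respect to lam
d2 = sp.diff(loglik_one, lam, 2)         # gives -y/lam**2

# E[y] = lam for Poisson(lam); since d2 is linear in y, we may substitute
# the mean directly. Multiply by n for the full i.i.d. sample.
fisher = -n * d2.subs(y, lam)

print(sp.simplify(fisher))               # n/lam, matching the derivation
```

The square root of this, $\sqrt{n/\lambda} \propto \lambda^{-1/2}$, recovers the Jeffreys prior derived above.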