$X_1,..., X_n \sim$ Uniform$(0,\theta)$ and $\theta$ has prior distribution Pareto$(\alpha,\theta_0)$. Posterior expectations and variances?


We have IID $X_1,..., X_n \sim$ Uniform$(0,\theta)$ and we have a prior distribution $p(\theta)=\frac{\alpha \theta_0^\alpha}{\theta^{\alpha +1}}\mathbb{1}\{\theta_0\leq\theta\}$ where $\alpha>2$. Let $\textbf{X}=(X_1,...,X_n)$ and $\textbf{x}=(x_1,...,x_n)$. My questions are:

  1. Am I correct in saying that the posterior distribution of $\theta$ is a Pareto$(n+\alpha, \theta_0) $ distribution? And hence is $\mathbb{E}(\theta|\textbf{x})=\frac{(n+\alpha)\theta_0}{n+\alpha-1}$ and $\text{Var}(\theta|\textbf{x})=\frac{(n+\alpha)\theta_0^2}{(n+\alpha-1)^2(n+\alpha-2)}$?

  2. Let $X_1,...,X_n \sim \text{Uniform}(0,\theta^*)$ be IID. Let $Y=\max\{X_1,...,X_n\}$ and assume $\max\{Y,\theta_0\}\to_{\mathbb{P}(\cdot;\theta^*)}\max\{\theta_0,\theta^*\}$. I need to find the limits to which $\mathbb{E}(\theta|\textbf{X})$ and $\text{Var}(\theta|\textbf{X})$ converge in probability. I have no idea how to proceed with this one.

For $1$, my reasoning is $$p(\theta|\textbf{x})\propto f(\textbf{x};\theta)p(\theta)=\big(\frac{1}{\theta^n}\big)\frac{\alpha \theta_0^\alpha}{\theta^{\alpha +1}}\mathbb{1}\{\theta_0\leq\theta\}\mathbb{1}\{x_1,...,x_n\in[0,\theta]\}=\frac{\alpha \theta_0^\alpha}{\theta^{n+\alpha +1}}\mathbb{1}\{\theta_0\leq\theta\}\mathbb{1}\{x_1,...,x_n\in[0,\theta]\}$$ And since the expectation of a Pareto$(\alpha,\theta_0)$ distribution is $\frac{\alpha\theta_0}{\alpha-1}$ for $\alpha>1$, and its variance is $\frac{\alpha\theta_0^2}{(\alpha-1)^2(\alpha-2)}$ for $\alpha>2$, the result follows?
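As a quick sanity check on the Pareto mean and variance formulas quoted above (with hypothetical parameter values, not ones from the problem), one can sample from Pareto$(\alpha,\theta_0)$ via the inverse CDF: if $U\sim\text{Uniform}(0,1)$, then $\theta_0 U^{-1/\alpha}$ has that distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, theta0 = 5.0, 2.0  # hypothetical values with alpha > 2

# Inverse-CDF sampling: theta0 * U^(-1/alpha) ~ Pareto(alpha, theta0)
u = rng.uniform(size=1_000_000)
samples = theta0 * u ** (-1.0 / alpha)

mean_formula = alpha * theta0 / (alpha - 1)
var_formula = alpha * theta0**2 / ((alpha - 1) ** 2 * (alpha - 2))

print(samples.mean(), mean_formula)  # both close to 2.5
print(samples.var(), var_formula)    # both close to 0.4167
```

The Monte Carlo estimates agree with the closed-form expressions, so the formulas being plugged in are the right ones.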


With $Y=\max(X_1,\dots,X_n)$ as in your question, the likelihood is

$$L(\theta)=\frac{1}{\theta^n}\cdot \mathbb{1}_{[Y,\infty)}(\theta)$$

The prior is

$$p(\theta)\propto \frac{1}{\theta^{\alpha+1}}\cdot \mathbb{1}_{[\theta_0,\infty)}(\theta)$$

Thus the posterior is

$$p(\theta|\mathbf{x})\propto\frac{1}{\theta^{n+\alpha+1}} \cdot \mathbb{1}_{[\max(Y,\theta_0),\infty)}(\theta)$$

This means that

$$(\theta|\mathbf{x})\sim \text{Pareto}\big(n+\alpha,\ \max(Y,\theta_0)\big)$$

with mean

$$\mathbb{E}[\theta|\mathbf{x}]=\frac{(\alpha+n)\max(Y,\theta_0)}{\alpha+n-1}$$

The exercise gives you a convenient assumption about the convergence of $\max(Y,\theta_0)$, so you should be able to conclude from there.

The same reasoning applies to the variance.
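For numerical intuition, here is a small simulation (with hypothetical values of $\alpha$, $\theta_0$ and $\theta^*$, not ones fixed by the problem) that evaluates the posterior mean and variance formulas above for growing $n$. Consistent with the assumed convergence of $\max(Y,\theta_0)$, the posterior mean approaches $\max(\theta_0,\theta^*)$ and the posterior variance shrinks toward $0$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, theta0 = 3.0, 1.0  # hypothetical prior parameters (alpha > 2)
theta_star = 2.0          # hypothetical true parameter

for n in (10, 100, 10_000):
    x = rng.uniform(0, theta_star, size=n)
    scale = max(x.max(), theta0)  # posterior scale: max(Y, theta_0)
    shape = n + alpha             # posterior shape: n + alpha
    post_mean = shape * scale / (shape - 1)
    post_var = shape * scale**2 / ((shape - 1) ** 2 * (shape - 2))
    print(f"n={n:6d}  mean={post_mean:.4f}  var={post_var:.2e}")
```

As $n$ grows, $Y\to\theta^*$ in probability and the shape-dependent factors tend to $1$, which is what the printed values reflect.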