Find the posterior distribution and posterior risk


I have this problem,

Let $X\sim U(0,\theta)$ with $\theta>0$. Assume a single random sample $X$, the squared error loss, and the prior $\pi(\theta)$ given by

$\pi(\theta) = \theta e^{-\theta}$ for $\theta>0$

(a) Find the posterior distribution of $\theta$.

(b) Show that the posterior risk of an estimate $\hat{\theta}$ is given by $e^{x}\int_{x}^{\infty}(\hat{\theta}-\theta)^{2}e^{-\theta}\, d\theta$.

My computation for (a) gives me the wrong result for (b). As I see it, I should compute $\pi(\theta|x) = \frac{\pi(\theta)\cdot \pi(x|\theta)}{\int_{0}^{\infty}\theta e^{-\theta}\, d\theta}$, which gives $\frac{\theta e^{-\theta}\cdot \frac{1}{\theta}}{1}=e^{-\theta}$.

However, when I then try to compute the posterior risk, I have no function of $x$. Someone told me that I should be integrating from $x$ to infinity in the denominator of my calculation, but I can't see why that would be true, because I thought the definition required integrating over all values of $\theta$.
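As a sanity check, the target expression in (b) can be evaluated numerically, taking it at face value. The sketch below (pure NumPy; the value of $x$, the grid bounds, and the candidate range for $\hat{\theta}$ are all arbitrary choices, not part of the problem) approximates $e^{x}\int_{x}^{\infty}(\hat{\theta}-\theta)^{2}e^{-\theta}\,d\theta$ on a grid and locates the minimizing $\hat{\theta}$:

```python
import numpy as np

x = 0.7                                  # arbitrary observed value
t = np.linspace(x, x + 60.0, 400001)     # theta grid; e^{-theta} is negligible past x + 60
dt = t[1] - t[0]
w = np.exp(-t)

def risk(theta_hat):
    # the expression from (b): e^x * integral_x^inf (theta_hat - theta)^2 e^{-theta} dtheta,
    # approximated by a Riemann sum on the truncated grid
    return np.exp(x) * ((theta_hat - t) ** 2 * w).sum() * dt

hats = np.linspace(x, x + 3.0, 601)      # candidate estimates
best = hats[np.argmin([risk(h) for h in hats])]
```

Since the expression is quadratic in $\hat{\theta}$, the grid minimizer should sit at the mean of the weight $e^{-\theta}$ restricted to $(x,\infty)$, which is a useful cross-check against whatever posterior one derives in (a).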

1 Answer

This solves (a).

Key point: always keep track of the domains when writing down densities of random variables.

Note that $\pi(\theta\mid x)=\pi(\theta)\pi(x\mid\theta)\pi(x)^{-1}$ with $\pi(\theta)=\theta\mathrm e^{-\theta}\mathbf 1_{\theta\gt0}$ and $\pi(x\mid\theta)=\theta^{-1}\mathbf 1_{0\lt x\lt\theta}$.

Thus, for every $x\geqslant0$, $\pi(\theta\mid x)=\pi(x)^{-1}\mathrm e^{-\theta}\mathbf 1_{\theta\gt x}$. Integrating $\pi(\cdot\mid x)$ over $\theta$ must give $1$, hence the normalizing factor is $\pi(x)=\int\mathrm e^{-\theta}\mathbf 1_{\theta\gt x}\mathrm d\theta=\int\limits_x^\infty\mathrm e^{-\theta}\mathrm d\theta=\mathrm e^{-x}$ and $$ \pi(\theta\mid x)=\mathrm e^{-(\theta-x)}\mathbf 1_{\theta\gt x}. $$ In words, conditionally on $x\geqslant0$, the distribution of $\theta$ is that of $x$ plus a standard exponential random variable.
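For what it's worth, (b) then follows by substituting this posterior into the definition of posterior risk under squared error loss: $\int(\hat{\theta}-\theta)^{2}\pi(\theta\mid x)\,\mathrm d\theta=\int_x^\infty(\hat{\theta}-\theta)^{2}\mathrm e^{-(\theta-x)}\,\mathrm d\theta=\mathrm e^{x}\int_x^\infty(\hat{\theta}-\theta)^{2}\mathrm e^{-\theta}\,\mathrm d\theta$. A quick numerical check of all the pieces (pure NumPy; the values of $x$ and $\hat{\theta}$ and the grid are arbitrary choices for illustration):

```python
import numpy as np

x, theta_hat = 0.7, 2.0                    # arbitrary observed value and estimate
t = np.linspace(x, x + 60.0, 400001)       # grid on theta > x; e^{-theta} is negligible beyond
dt = t[1] - t[0]
unnorm = np.exp(-t)                        # prior * likelihood is proportional to e^{-theta} on theta > x

Z = unnorm.sum() * dt                      # normalizing constant pi(x); should equal e^{-x}
post = unnorm / Z                          # posterior density e^{-(theta - x)} on theta > x

mean = (t * post).sum() * dt               # posterior mean; a shifted Exp(1) gives x + 1
risk_post = ((theta_hat - t) ** 2 * post).sum() * dt             # E[(theta_hat - theta)^2 | x]
risk_b = np.exp(x) * ((theta_hat - t) ** 2 * unnorm).sum() * dt  # formula from (b)
```

The two risk computations agree because $\pi(x)^{-1}=\mathrm e^{x}$, which is exactly the point the question was missing: the denominator in Bayes' rule already carries the restriction $\theta>x$ from the likelihood's indicator.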