Bayesian inference exercise


I am learning Bayesian statistics online and I have a test in a couple of days. I have no idea how to solve this exercise; any help would be appreciated. Something similar might be on the quiz...

Statistical decision theory: a decision-theoretic approach to the estimation of an unknown parameter $\theta$ introduces a loss function $L(\theta, a)$ which, loosely speaking, gives the cost of deciding that the parameter has the value $a$ when it is in fact equal to $\theta$. The estimate $a$ can be chosen to minimize the posterior expected loss, $$E[L(\theta,a)\mid y]= \int L(\theta,a)\,p(\theta\mid y)\,d\theta.$$ This optimal choice of $a$ is called a Bayes estimate for the loss function $L$. Show that:

(a) If $L(\theta, a) = (\theta − a)^2$ (squared error loss), then the posterior mean, $E(\theta|y)$, if it exists, is the unique Bayes estimate of $\theta$.
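For (a), one standard route (not part of the exercise statement, just a hint) is to expand the quadratic inside the posterior expectation and minimize over $a$:

```latex
E\bigl[(\theta - a)^2 \,\big|\, y\bigr]
  = E[\theta^2 \mid y] - 2a\,E[\theta \mid y] + a^2 ,
```

which is a convex quadratic in $a$; setting the derivative $-2E[\theta\mid y] + 2a$ to zero gives the unique minimizer $a = E[\theta\mid y]$, provided that posterior mean exists.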

(b) If $L(\theta, a) = |\theta − a|$, then any posterior median of $\theta$ is a Bayes estimate of $\theta$.
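A sketch for (b), again not from the exercise text: split the expected absolute loss at $a$ and differentiate under the integral sign (assuming this is justified, e.g. for a continuous posterior):

```latex
\frac{d}{da}\, E\bigl[|\theta - a| \,\big|\, y\bigr]
  = \frac{d}{da}\!\left[\int_{-\infty}^{a} (a-\theta)\,p(\theta\mid y)\,d\theta
      + \int_{a}^{\infty} (\theta-a)\,p(\theta\mid y)\,d\theta\right]
  = P(\theta < a \mid y) - P(\theta > a \mid y),
```

which is nonpositive below any posterior median and nonnegative above it, so the expected loss is minimized at any $a$ with $P(\theta \le a \mid y) \ge 1/2$ and $P(\theta \ge a \mid y) \ge 1/2$.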

(c) If $k_0$ and $k_1$ are nonnegative numbers, not both zero, and $$L(\theta,a)= \begin{cases} k_0(\theta-a) & \text{if } \theta\geq a,\\ k_1(a-\theta) & \text{if } \theta<a,\end{cases}$$ then any $k_0/(k_0+k_1)$ quantile of the posterior distribution $p(\theta|y)$ is a Bayes estimate of $\theta$.
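Not a proof, but a Monte Carlo sanity check of all three claims: draw samples from a hypothetical posterior (Gamma(3, 1) here, standing in for $p(\theta\mid y)$), approximate the posterior expected loss on a grid of candidate estimates $a$, and compare the minimizer with the posterior mean, median, and $k_0/(k_0+k_1)$ quantile.

```python
import numpy as np

# Hypothetical posterior: Gamma(3, 1) samples stand in for p(theta | y).
rng = np.random.default_rng(0)
theta = rng.gamma(shape=3.0, scale=1.0, size=200_000)

# Candidate estimates a on a grid; the posterior expected loss is
# approximated by a Monte Carlo average over the posterior samples.
grid = np.linspace(0.0, 8.0, 801)

def argmin_loss(loss):
    """Return the grid point minimizing the Monte Carlo expected loss."""
    risks = [loss(theta, a).mean() for a in grid]
    return grid[int(np.argmin(risks))]

# (a) squared error loss -> posterior mean
a_sq = argmin_loss(lambda t, a: (t - a) ** 2)

# (b) absolute error loss -> posterior median
a_abs = argmin_loss(lambda t, a: np.abs(t - a))

# (c) asymmetric linear loss -> k0 / (k0 + k1) quantile
k0, k1 = 3.0, 1.0
a_lin = argmin_loss(lambda t, a: np.where(t >= a, k0 * (t - a), k1 * (a - t)))

print(a_sq, theta.mean())                         # close to posterior mean
print(a_abs, np.median(theta))                    # close to posterior median
print(a_lin, np.quantile(theta, k0 / (k0 + k1)))  # close to 0.75 quantile
```

The three printed pairs should agree up to the grid spacing and Monte Carlo noise; this only illustrates the results, it does not replace the requested proofs.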