I'm working on a mathematical statistics problem.
Let $X_{1},\dots,X_{n}$ be a sample from a probability distribution with density $p_{\theta}(x)=\theta x^{\theta-1}$ for $0 \leq x \leq 1$ and $0$ elsewhere, where $\theta>0$ is unknown.
Now I want to determine the Bayes estimator for $\theta$ with respect to prior density $\pi$ given by $\pi(\theta)=e^{-\theta}$ for $\theta>0$ and $0$ elsewhere.
So my understanding of how to tackle this problem is as follows: we first note that both densities are compatible in their domains for $\theta$ and $x$, which makes them workable. Now we know that the Bayes estimator will be the expected value of the posterior, which is given by $p_{\theta}(x) \pi(\theta)$. Thus, we will compute the posterior and then compute its expected value.
$p_{\theta}(x) \pi(\theta) = \theta x^{\theta-1} e^{-\theta}$, which we can't really simplify. Thus we take its expected value and find
$E(p_{\theta}(x) \pi(\theta)) = \frac{1}{n} \sum_{i=1}^{n} \theta x_{i}^{\theta-1} e^{-\theta} = \frac{\theta e^{-\theta}}{n} \sum_{i=1}^{n} x_{i}^{\theta-1}.$
But this solution doesn't sit quite right with me; I expect to find a more intuitive expression, one that is standard.
So where do I go wrong? Any hints/suggestions would be much appreciated!
I want to determine the Bayes estimator for $\theta$ with respect to prior density $\pi$ given by $\pi(\theta)=e^{-\theta}$ for $\theta>0$ and $0$ elsewhere.
Note that both densities are compatible in their domains for $\theta$ and $x$, which makes them workable. The Bayes estimator (under squared-error loss) is the expected value of the posterior, which is proportional to $p_{\theta}(x) \pi(\theta)$. Thus, we will compute the posterior and then compute its expected value. I will not bother with the normalizing constant of the posterior, so I will leave it out of the computations.
First we will compute the likelihood $\prod \limits_{i=1}^{n} p_{\theta}(X_{i})$, and it follows that
\begin{equation} \begin{split} \prod\limits_{i=1}^{n} p_{\theta}(X_{i}) &= \prod_{i=1}^{n} \theta X_{i}^{\theta-1}\\ &= e^{\log \prod_{i=1}^{n} \theta X_{i}^{\theta-1}}\\ &= e^{\log \theta^{n} \prod_{i=1}^{n} X_{i}^{\theta-1}}\\ &= e^{\log \theta^{n}} e^{\sum\limits_{i=1}^{n}(\theta-1)\log X_{i}}\\ &=\theta^{n} e^{\sum\limits_{i=1}^{n}(\theta-1)\log X_{i}}. \end{split} \end{equation}
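As a quick numerical sanity check of this algebra (with a made-up sample and a made-up value of $\theta$, just for illustration), the raw product of densities should agree with the rewritten form $\theta^{n} e^{(\theta-1)\sum \log X_{i}}$:

```python
import math

# Hypothetical sample in (0, 1] and hypothetical theta, only to check the identity above.
xs = [0.3, 0.7, 0.9, 0.5]
theta = 2.5
n = len(xs)

# Direct product of the densities theta * x^(theta - 1).
direct = math.prod(theta * x ** (theta - 1) for x in xs)

# Rewritten form: theta^n * exp((theta - 1) * sum(log x_i)).
rewritten = theta ** n * math.exp((theta - 1) * sum(math.log(x) for x in xs))

assert math.isclose(direct, rewritten)
```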
Now we compute the posterior by multiplying the above by our prior density $\pi(\theta)$. We then have
\begin{equation}\label{eqgood} \begin{split} \theta^{n} e^{\sum\limits_{i=1}^{n}(\theta-1)\log X_{i}} e^{-\theta} &= \theta^{n} e^{-\theta +\theta \sum\limits_{i=1}^{n} \log X_{i} - \sum\limits_{i=1}^{n} \log X_{i}}. \end{split} \end{equation}
Since the factor $e^{-\sum_{i=1}^{n} \log X_{i}}$ does not depend on $\theta$, we may drop it, which gives
\begin{equation}\label{eqbad} \begin{split} \theta^{n} e^{-\theta +\theta \sum\limits_{i=1}^{n} \log X_{i} - \sum\limits_{i=1}^{n} \log X_{i}} &\propto \theta^{n} e^{-\theta +\theta \sum\limits_{i=1}^{n} \log X_{i}}. \end{split} \end{equation}
From the above we can see that the posterior is a gamma distribution with $\alpha= n+1$ and $\lambda=1-\sum\limits_{i=1}^{n}\log (X_{i})$ (note that $\lambda \geq 1 > 0$, since each $X_{i} \leq 1$ makes $\log X_{i} \leq 0$). Then the Bayes estimator is the posterior expected value, which for a gamma distribution is equal to $\frac{\alpha}{\lambda}$; this gives $\hat{\theta} = \frac{n+1}{1-\sum\limits_{i=1}^{n}\log (X_{i})}$.
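To convince ourselves of the result, here is a small simulation sketch (the seed, sample size, and "true" $\theta$ are arbitrary choices). Since the CDF on $[0,1]$ is $F(x)=x^{\theta}$, we can sample via the inverse CDF $X = U^{1/\theta}$, and the estimator $\frac{n+1}{1-\sum \log X_{i}}$ should then land close to the true $\theta$ for large $n$:

```python
import math
import random

random.seed(0)

theta_true = 2.0   # hypothetical "true" parameter for this sketch
n = 100_000

# Inverse-CDF sampling: F(x) = x^theta on [0, 1], so X = U^(1/theta).
xs = [random.random() ** (1.0 / theta_true) for _ in range(n)]

# Bayes estimator derived above: (n + 1) / (1 - sum(log X_i)).
lam = 1.0 - sum(math.log(x) for x in xs)
theta_bayes = (n + 1) / lam
print(theta_bayes)  # should be close to theta_true for large n
```

For large $n$ the prior's contribution (the $+1$ in numerator and the $1$ in $\lambda$) washes out, and the estimator behaves like the MLE $n / (-\sum \log X_{i})$, which is a useful consistency check.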