I've got this exercise, which I'm trying to work through using an example, but the example seems very different, so I'm not sure whether what I'm doing is right.
- I've got a loss function for $\theta$: $l(a,\theta)=\frac{(a-\theta)^2}{\theta(1-\theta)}$
- I also know that the prior distribution $\pi(\theta)$ is continuous uniform on $(0,1)$.
- The likelihood is Bernoulli: $X$ is the total number of ones in a sequence of $n$ independent Bernoulli trials, each with sample space $\{0,1\}$, so for a single trial $f(x\mid\theta) = \theta^x(1-\theta)^{1-x}$, $x=0,1$.
How do I find Bayes estimator of $\theta$?
So far, this is what I've tried:
$h(a)=E[l(a,\theta)\mid X=x]$
$h(a)=\int_0^1 l(a, \theta)\,\pi(\theta\mid x)\,d\theta$
$h(a)=\int_0^1\frac{(a-\theta)^2}{\theta(1-\theta)}\,\frac{\Gamma(3)}{\Gamma(x+1)\Gamma(2-x)}\,\theta^x(1-\theta)^{1-x}\,d\theta$
$h(a)=\frac{\Gamma(3)}{\Gamma(x+1)\Gamma(2-x)}\int_0^1(a-\theta)^2\,\theta^{x-1}(1-\theta)^{-x}\,d\theta$
(Note that $\theta$ is integrated out, so the posterior expected loss is a function of $a$ alone, not of $\theta$.)
I'm not sure where to go from here
A hint: You want to minimize $$ E[l(a, \theta)]=\int_0^1l(a, \theta)\pi(\theta|x)d\theta. $$
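To see concretely what this minimization looks like, suppose there are $n$ trials with $y$ ones, so the posterior is Beta$(y+1,\,n-y+1)$. The $\frac{1}{\theta(1-\theta)}$ factor in the loss cancels into the Beta kernel, and expanding $(a-\theta)^2$ turns the posterior expected loss into a quadratic in $a$ whose coefficients are Beta functions. A quick numerical sketch (the values of $y$ and $n$ are just illustrative):

```python
from scipy.special import beta

def expected_loss_coefficients(y, n):
    """Coefficients of the quadratic in a:
    E[l(a, theta) | x]  is proportional to
        a^2 * B(y, n-y) - 2a * B(y+1, n-y) + B(y+2, n-y),
    obtained by expanding (a - theta)^2 against the Beta(y+1, n-y+1)
    posterior after the loss's 1/(theta(1-theta)) factor lowers each
    exponent by one."""
    return beta(y, n - y), beta(y + 1, n - y), beta(y + 2, n - y)

y, n = 3, 10
b0, b1, b2 = expected_loss_coefficients(y, n)
a_star = b1 / b0  # vertex of the quadratic a^2*b0 - 2a*b1 + b2
print(a_star)     # 0.3, i.e. y/n
```

The minimizer $a^\ast = B(y+1,\,n-y)/B(y,\,n-y)$ is exactly the ratio that appears when you set the derivative to zero.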
Edit:
In the problem statement you say the trials are Bernoulli. I take this to mean that each trial is Bernoulli($\theta$) and that we observe $n$ such trials. I also let $y=\sum_{i=1}^nx_i$ be their sum.
So the derivative of the posterior expected loss with respect to $a$ is proportional to (the constant factor in front doesn't matter, since we set the derivative to zero)
$$ \frac{d}{da}\int (a-\theta)^2\theta^{y-1}(1-\theta)^{n-y-1}d\theta=2\int(a-\theta)\theta^{y-1}(1-\theta)^{n-y-1}d\theta=0 $$ and moving things around we get $$ a\int\theta^{y-1}(1-\theta)^{n-y-1}d\theta=\int\theta^{y}(1-\theta)^{n-y-1}d\theta $$ where the integrals resemble Beta densities, so we manipulate them to get $$ a\frac{\Gamma(y)\Gamma(n-y)}{\Gamma(n)}\int\frac{\Gamma(n)}{\Gamma(y)\Gamma(n-y)}\theta^{y-1}(1-\theta)^{n-y-1}d\theta=\frac{\Gamma(y+1)\Gamma(n-y)}{\Gamma(n+1)}\int\frac{\Gamma(n+1)}{\Gamma(y+1)\Gamma(n-y)}\theta^{y}(1-\theta)^{n-y-1}d\theta\\ a=\frac{\Gamma(y+1)\Gamma(n-y)}{\Gamma(n+1)}\frac{\Gamma(n)}{\Gamma(y)\Gamma(n-y)}=\frac{y}{n}=\frac{1}{n}\sum_{i=1}^n x_i $$
With your loss function, this is the Bayes estimator.
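As a sanity check on the algebra, one can minimize the posterior expected loss numerically and compare the result with $y/n$. A sketch with illustrative values of $y$ and $n$:

```python
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def posterior_expected_loss(a, y, n):
    # Unnormalized posterior expected loss: the 1/(theta(1-theta)) in the
    # loss turns the Beta(y+1, n-y+1) kernel theta^y (1-theta)^(n-y)
    # into theta^(y-1) (1-theta)^(n-y-1).
    integrand = lambda t: (a - t) ** 2 * t ** (y - 1) * (1 - t) ** (n - y - 1)
    value, _ = quad(integrand, 0.0, 1.0)
    return value

y, n = 3, 10
res = minimize_scalar(posterior_expected_loss, args=(y, n),
                      bounds=(0.0, 1.0), method="bounded")
print(res.x)  # close to 0.3 = y/n
```

(This requires $1 \le y \le n-1$ so that the integral converges at both endpoints; the boundary cases $y=0$ and $y=n$ need separate treatment with this loss.)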