Let $X_1, X_2, X_3, \ldots, X_n$ be a random sample from $N(e,1)$, and let the prior p.d.f. of $e$ be $N(0,\sigma^2)$. Under the squared error loss function $L(e,d)={(d-e)}^2$, find the Bayes' decision rule and the Bayes' risk of the Bayes' rule.
EDIT: Thanks for the help, but I am just not able to work out the Bayes' risk by myself, and the instructor is not being very supportive on this problem. It is a simple homework problem which he gave casually in class, but it has become a pain in my a**. Please help me solve for the Bayes' risk as well. Thank you.
This may be somewhat backwards, but I would actually find the Bayes' rule first and then derive the risk afterwards.
Rewriting your $e$ as $\mu$ to follow convention, note that under squared error loss, the Bayes decision rule is the mean of the posterior distribution of $\mu$. $$X_1, \ldots, X_n \sim N(\mu, 1) \quad \textrm{and} \quad \mu \sim N(0, \sigma^2)$$ To find the posterior distribution, we only concern ourselves with the kernel of the density (i.e. only with the parts of the density that involve $\mu$). This will simplify things. $$\begin{align*} \pi(\mu \,|\, x) &\propto \exp \left\{ -\frac{1}{2} \left( \frac{\sum(x_i - \mu)^2}{1} + \frac{\mu^2}{\sigma^2} \right) \right\}\\ &\propto \exp \left\{ -\frac{1}{2\sigma^2} \left( \sigma^2 \sum(x_i - \mu)^2 + \mu^2 \right) \right\} \end{align*}$$ Next, write $\sum(x_i - \mu)^2$ as $\sum(x_i - \overline{x} + \overline{x} - \mu)^2$ to see that it simplifies to $\sum(x_i - \overline{x})^2 + n(\overline{x} - \mu)^2$. The first term does not involve $\mu$, so we drop it to eliminate the summation, and expand the second term (discarding the constant $n\overline{x}^2$) to get: $$\begin{align*} \pi(\mu \,|\, x) &\propto \exp \left\{ -\frac{1}{2\sigma^2} \left( \sigma^2 (n\mu^2 - 2 n \overline{x} \mu) + \mu^2 \right) \right\}\\ &\propto \exp \left\{ -\frac{1}{2\sigma^2} \left( \mu^2(\sigma^2 n + 1) - 2\mu(\sigma^2 n \overline{x}) \right) \right\}\\ &\propto \exp \left\{ -\frac{1}{2\frac{\sigma^2}{\sigma^2 n + 1}} \left( \mu^2 - 2\mu \frac{\sigma^2 n \overline{x}}{\sigma^2 n + 1} \right) \right\}\\ &\propto \exp \left\{ -\frac{1}{2\frac{\sigma^2}{\sigma^2 n + 1}} \left(\mu - \frac{\sigma^2 n \overline{x}}{\sigma^2 n + 1} \right)^2 \right\} \end{align*}$$ This is the kernel of a normal distribution with mean $\frac{\sigma^2 n}{\sigma^2 n + 1} \overline{x}$ and variance $\frac{\sigma^2}{\sigma^2 n + 1}$. The posterior mean is your Bayes' decision rule.
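If you want to sanity-check the algebra, here is a quick numerical sketch (not part of the formal solution; the particular data and $\sigma$ are made up): compute the unnormalized posterior on a dense grid of $\mu$ values and compare its mean to the closed form $\frac{\sigma^2 n}{\sigma^2 n + 1}\overline{x}$.

```python
import numpy as np

sigma = 2.0                                # prior standard deviation (assumed value)
x = np.array([1.3, 0.7, 2.1, -0.4, 1.0])   # made-up sample from N(mu, 1)
n = len(x)
xbar = x.mean()

mu = np.linspace(-10.0, 10.0, 200001)      # dense uniform grid over mu

# Unnormalized log posterior: log-likelihood plus log prior, dropping constants
log_post = -0.5 * ((x[:, None] - mu) ** 2).sum(axis=0) - 0.5 * mu**2 / sigma**2
post = np.exp(log_post - log_post.max())   # stabilize before exponentiating
post /= post.sum()                         # normalize on the uniform grid

numeric_mean = (mu * post).sum()           # grid approximation of the posterior mean
closed_form = sigma**2 * n / (sigma**2 * n + 1) * xbar

print(numeric_mean, closed_form)           # the two should agree closely
```

The two numbers agree to several decimal places, which is a reassuring check on the completion-of-the-square step.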
You can see why this makes sense. Normally, without prior information, we would just estimate the mean by $\overline{x}$. But since we know that $\mu$ is centered at 0, we take that information into account by shrinking the naive estimator $\overline{x}$ towards 0 by the factor $\frac{\sigma^2 n}{\sigma^2 n + 1}$. Furthermore, if we have lots of data (i.e. $n$ is large) or if the variance of $\mu$ is known to be large, then the shrinkage factor is closer to 1, meaning we give less weight to our prior knowledge.
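To make the shrinkage behavior concrete, here is a small illustration (my own, not part of the original argument) of how the factor $\frac{\sigma^2 n}{\sigma^2 n + 1}$ moves toward 1 as $n$ grows:

```python
def shrinkage(n, sigma2):
    """Weight placed on the sample mean in the Bayes rule: sigma^2*n / (sigma^2*n + 1)."""
    return sigma2 * n / (sigma2 * n + 1)

# With prior variance sigma^2 = 1, the weight on x-bar approaches 1 as n grows,
# i.e. the data increasingly dominate the prior.
for n in (1, 10, 100, 1000):
    print(n, shrinkage(n, sigma2=1.0))
```

With $n = 1$ the rule splits the weight evenly between $\overline{x}$ and the prior mean 0; by $n = 1000$ essentially all the weight is on the data.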
To find the Bayes' risk, first apply the bias-variance decomposition to the rule $d(X) = c\overline{X}$, where $c = \frac{\sigma^2 n}{\sigma^2 n + 1}$, to get the "ordinary" (frequentist) risk: $$R(\mu, d) = \operatorname{Var}(c\overline{X}) + \left(\operatorname{E}[c\overline{X}] - \mu\right)^2 = \frac{c^2}{n} + (1-c)^2 \mu^2$$ Then apply the definition of Bayes' risk by integrating against the prior; since $\operatorname{E}[\mu^2] = \sigma^2$ under the prior, this gives $$r(d) = \frac{c^2}{n} + (1-c)^2 \sigma^2 = \frac{\sigma^2}{\sigma^2 n + 1}$$ (Equivalently: under squared error loss the Bayes' risk is the expected posterior variance, and here the posterior variance $\frac{\sigma^2}{\sigma^2 n + 1}$ does not depend on the data, so it is itself the Bayes' risk.)
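As a final check, a Monte Carlo simulation (my own sketch, with an arbitrary choice of $\sigma$ and $n$) should reproduce the Bayes' risk $\frac{\sigma^2}{\sigma^2 n + 1}$: draw $\mu$ from the prior, draw $\overline{x} \,|\, \mu \sim N(\mu, 1/n)$, apply the Bayes rule, and average the squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma, n, reps = 1.0, 5, 200_000
c = sigma**2 * n / (sigma**2 * n + 1)       # shrinkage factor of the Bayes rule

mu = rng.normal(0.0, sigma, size=reps)      # mu ~ N(0, sigma^2), the prior
xbar = rng.normal(mu, 1.0 / np.sqrt(n))     # xbar | mu ~ N(mu, 1/n)

mc_risk = np.mean((c * xbar - mu) ** 2)     # average squared error of the Bayes rule
exact = sigma**2 / (sigma**2 * n + 1)       # = 1/6 for sigma = 1, n = 5

print(mc_risk, exact)
```

The simulated risk lands within Monte Carlo error of $\frac{\sigma^2}{\sigma^2 n + 1}$, matching the integral computed above.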