If $\theta$ is $N(\bar{\theta},\sigma^2_\theta)$, and $s=\theta+\epsilon$, where $\epsilon$ is $N(0, \sigma^2_\epsilon)$, how can I derive that $E(\theta|s)=\frac{\frac{1}{\sigma^2_\theta}\bar{\theta}+\frac{1}{\sigma^2_\epsilon}s}{\frac{1}{\sigma^2_\theta}+\frac{1}{\sigma^2_\epsilon}}$?
It wasn't explicitly stated, but I think $\theta$ and $\epsilon$ are independent.
If you could suggest how I can go about deriving the conditional distribution of $\theta|s$ (e.g. via their individual pdfs?) rather than just where the conditional mean above came from, that would be greatly appreciated!
The derivation follows from Bayes' rule for obtaining a posterior from a prior $\mu(\theta)$ on $\theta$ and a likelihood $f(s|\theta)$. Here $s=\theta+\epsilon$ with $\epsilon$ independent of $\theta$, so the distribution of $s|\theta$ is $N(\theta, \sigma_{\epsilon}^2)$. The prior and the posterior are called conjugate distributions here because, by the properties of the Normal distribution, the posterior is in the same family as the prior: $\theta|s$ is again Normal, with mean equal to the precision-weighted average of $\bar{\theta}$ and $s$ (which is exactly the expression you quote) and precision equal to the sum of the two precisions $\frac{1}{\sigma_\theta^2}+\frac{1}{\sigma_\epsilon^2}$.
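Concretely, you can derive the conditional distribution from the individual pdfs by completing the square in $\theta$. Here is a sketch in the question's notation (all factors not involving $\theta$ are absorbed into the proportionality constant):

```latex
f(\theta \mid s) \;\propto\; f(s \mid \theta)\,\mu(\theta)
  \;\propto\; \exp\!\left(-\frac{(s-\theta)^2}{2\sigma_\epsilon^2}\right)
              \exp\!\left(-\frac{(\theta-\bar{\theta})^2}{2\sigma_\theta^2}\right)

% Expand both squares and keep only the terms involving theta:
  \;\propto\; \exp\!\left(-\frac{1}{2}\left[
      \left(\frac{1}{\sigma_\theta^2}+\frac{1}{\sigma_\epsilon^2}\right)\theta^2
      \;-\; 2\left(\frac{\bar{\theta}}{\sigma_\theta^2}+\frac{s}{\sigma_\epsilon^2}\right)\theta
    \right]\right)

% Completing the square shows this is the kernel of a Normal density, so
\theta \mid s \;\sim\; N\!\left(
  \frac{\frac{1}{\sigma_\theta^2}\bar{\theta}+\frac{1}{\sigma_\epsilon^2}s}
       {\frac{1}{\sigma_\theta^2}+\frac{1}{\sigma_\epsilon^2}},
  \;\frac{1}{\frac{1}{\sigma_\theta^2}+\frac{1}{\sigma_\epsilon^2}}
\right)
```

Reading off the mean of this Normal gives the formula for $E(\theta|s)$ in the question; the variance is the reciprocal of the summed precisions.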
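As a quick numerical sanity check, one can integrate $f(s|\theta)\,\mu(\theta)$ over a grid of $\theta$ values and compare the resulting posterior mean with the precision-weighted formula. This is just a sketch with made-up parameter values (`theta_bar`, `sigma_theta`, `sigma_eps`, `s` are illustrative, not from the question):

```python
import math

# Illustrative parameter values (hypothetical, chosen only for the check).
theta_bar, sigma_theta, sigma_eps = 1.0, 2.0, 0.5
s = 2.3

def npdf(x, mu, sd):
    """Normal density N(mu, sd^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Posterior is proportional to f(s | theta) * mu(theta); integrate on a
# wide, fine grid so truncation and discretization error are negligible.
lo, hi, n = theta_bar - 10 * sigma_theta, theta_bar + 10 * sigma_theta, 200001
h = (hi - lo) / (n - 1)
num = den = 0.0
for i in range(n):
    t = lo + i * h
    w = npdf(s, t, sigma_eps) * npdf(t, theta_bar, sigma_theta)
    num += t * w
    den += w
posterior_mean = num / den  # E(theta | s) by numerical integration

# Precision-weighted formula from the answer.
formula = ((theta_bar / sigma_theta**2 + s / sigma_eps**2)
           / (1 / sigma_theta**2 + 1 / sigma_eps**2))

print(posterior_mean, formula)  # the two should agree closely
```

The two numbers agree to many decimal places, which is a useful check that no sign or precision term was dropped in the algebra.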