I am working through *The Elements of Statistical Learning*, Section 8.4, "Relationship Between the Bootstrap and Bayesian Inference".
We have a single observation $z$ drawn from a normal distribution, $z \sim N(\theta, 1)$. We carry out a Bayesian analysis for $\theta$, which requires specifying a prior; we choose $\theta \sim N(0, \tau)$, where $\tau$ is the prior variance.
This is said to give the posterior distribution $\theta \mid z \sim N\!\left(\frac{z}{1+1/\tau}, \frac{1}{1+1/\tau}\right)$.
How do we derive this posterior? By Bayes' rule, the posterior distribution is proportional to the likelihood times the prior, but how does that lead to this particular form?
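A sketch of the derivation, by completing the square in the exponent. Starting from Bayes' rule and dropping constants that do not depend on $\theta$:

$$p(\theta \mid z) \propto p(z \mid \theta)\,p(\theta) \propto \exp\!\left(-\tfrac{1}{2}(z-\theta)^2\right)\exp\!\left(-\tfrac{\theta^2}{2\tau}\right).$$

Collecting the terms in $\theta$ inside the exponent:

$$-\tfrac{1}{2}\left[(z-\theta)^2 + \tfrac{\theta^2}{\tau}\right]
= -\tfrac{1}{2}\left[\left(1+\tfrac{1}{\tau}\right)\theta^2 - 2z\theta\right] + \text{const}
= -\tfrac{1}{2}\left(1+\tfrac{1}{\tau}\right)\left(\theta - \tfrac{z}{1+1/\tau}\right)^2 + \text{const}.$$

This is the kernel of a normal density with mean $\frac{z}{1+1/\tau}$ and variance $\frac{1}{1+1/\tau}$, which is exactly the stated posterior.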
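As a numerical sanity check (my own sketch, not from the book), one can discretize $\theta$ on a fine grid, multiply the likelihood by the prior, normalize, and compare the resulting mean and variance with the closed-form posterior; the values of `z` and `tau` below are arbitrary example choices:

```python
import numpy as np

z, tau = 1.3, 2.0                                  # arbitrary example values
theta = np.linspace(-10.0, 10.0, 200001)           # fine grid over theta
dtheta = theta[1] - theta[0]

likelihood = np.exp(-0.5 * (z - theta) ** 2)       # N(theta, 1) density at z, up to a constant
prior = np.exp(-0.5 * theta ** 2 / tau)            # N(0, tau) density, up to a constant
post = likelihood * prior
post /= post.sum() * dtheta                        # normalize on the grid

# Moments of the grid posterior.
mean_num = (theta * post).sum() * dtheta
var_num = ((theta - mean_num) ** 2 * post).sum() * dtheta

# Closed-form posterior from the book.
mean_closed = z / (1 + 1 / tau)
var_closed = 1 / (1 + 1 / tau)

print(mean_num, mean_closed)                       # the two means should agree
print(var_num, var_closed)                         # and the two variances
```

The agreement of the numerical and closed-form moments confirms the stated posterior for this choice of $z$ and $\tau$.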