An unknown parameter $\theta$ is randomly drawn at time $t=0$ according to prior p.d.f. $\mu_0(\cdot)$ that has support $[L,R]\subseteq\mathbb{R}$.
At each time $t\in\{1,2,\dots\}$ an agent makes an estimate $a_t\in\mathbb{R}$ of $\theta$ and observes an outcome $y_t\,|\,(a_t,\theta) \sim\text{Bernoulli}(\sigma(a_t,\theta))$, where $\sigma:\mathbb{R}^2\to[0,1]$ gives the probability of success given $(a_t,\theta)$. (Assume $y_t$ is independent of the outcomes $\{y_\tau\}_{\tau\neq t}$ and estimates $\{a_\tau\}_{\tau\neq t}$ from all other periods.) Let $\mu_t(\cdot\,|\,a_1,y_1,\dots,a_t,y_t)$ denote the posterior belief (formed using Bayes' rule) after observing the sequence $\{(\color{red}{a_\tau},\color{blue}{y_\tau})\}_{\tau=1}^t$ of $\color{red}{\text{estimates}}$ and $\color{blue}{\text{outcomes}}$.
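For concreteness, the Bayes update here is the recursion $\mu_t(\theta)\propto \mu_{t-1}(\theta)\,\sigma(a_t,\theta)^{y_t}\,\big(1-\sigma(a_t,\theta)\big)^{1-y_t}$. Below is a minimal numerical sketch of this recursion on a grid, assuming (hypothetically) a uniform prior on $[L,R]=[-2,2]$, the example kernel $\sigma(a,\theta)=1/(1+(a-\theta)^2)$, and posterior-mode estimates; it is not an analytic solution, just a way to see how the posterior evolves.

```python
import numpy as np

# Hypothetical support [L, R] = [-2, 2] with a uniform prior, discretized on a grid.
L_, R_ = -2.0, 2.0
grid = np.linspace(L_, R_, 2001)
mu = np.ones_like(grid)
mu /= np.trapz(mu, grid)  # normalize the prior density

def sigma(a, theta):
    # Example success probability from the post: 1 / (1 + (a - theta)^2)
    return 1.0 / (1.0 + (a - theta) ** 2)

def bayes_update(mu, a_t, y_t):
    """One step of mu_t ∝ mu_{t-1} * sigma^{y_t} * (1 - sigma)^{1 - y_t}."""
    p = sigma(a_t, grid)
    lik = p if y_t == 1 else 1.0 - p
    post = mu * lik
    return post / np.trapz(post, grid)

# Simulate: true theta drawn once, agent plays the posterior mode each period.
rng = np.random.default_rng(0)
theta_true = 0.7
for t in range(50):
    a_t = grid[np.argmax(mu)]                    # posterior-mode estimate (one possible rule)
    y_t = int(rng.random() < sigma(a_t, theta_true))
    mu = bayes_update(mu, a_t, y_t)

posterior_mean = np.trapz(grid * mu, grid)
print(posterior_mean)
```

Note that with a symmetric kernel a single estimate $a_t$ cannot distinguish $\theta$ from $2a_t-\theta$; varying $a_t$ over time is what breaks this symmetry.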
My (Soft) Question: which choices of $\mu_0$ and $\sigma$ make the posterior belief $\mu_t(\cdot\,|\,\cdot)$ analytically tractable to compute?
Small Request: $\sigma$ could indeed be "anything," but ideally I would like something that is monotonically decreasing in how "far apart" $a_t$ and $\theta$ are (e.g. $\sigma(a_t,\theta)=\frac{1}{1+(a_t-\theta)^2}$). To fix ideas, I will now describe a class of functions that seems "reasonable" to me.
Let $\sigma(a_t,\theta)=h(\theta-a_t)$ where $h:\mathbb{R}\to[0,1]$ satisfies
- $h(0)=1$ and $h(x)<1$ for all $x\in\mathbb{R}\setminus\{0\}$, and
- $h(x)=h(-x) \text{ and } h'(x)= -h'(-x) \ \forall x\in\mathbb{R}$.
(i.e. unimodal and symmetric about $x=0$, where it achieves a maximum value of $1$).
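As a sanity check on this class, the example kernel $h(x)=\frac{1}{1+x^2}$ (so $\sigma(a_t,\theta)=h(\theta-a_t)$ matches the example above) can be verified numerically against the listed conditions; this is just an illustrative check, not part of the question itself.

```python
import numpy as np

def h(x):
    # Example kernel from the post: h(x) = 1 / (1 + x^2), with values in (0, 1].
    return 1.0 / (1.0 + np.asarray(x, dtype=float) ** 2)

x = np.linspace(-10.0, 10.0, 100001)   # includes x = 0
x_nonzero = x[x != 0]

# Condition 1: h(0) = 1 and h(x) < 1 for every x != 0.
assert h(0.0) == 1.0
assert h(x_nonzero).max() < 1.0

# Condition 2: h is even, so its derivative is odd.
assert np.allclose(h(x), h(-x))
eps = 1e-6
dh = (h(x + eps) - h(x - eps)) / (2 * eps)        # central difference at x
dh_neg = (h(-x + eps) - h(-x - eps)) / (2 * eps)  # central difference at -x
assert np.allclose(dh, -dh_neg, atol=1e-8)

print("h(x) = 1/(1+x^2) satisfies the stated conditions")
```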