Consider a binary random variable $\theta\in\{0,1\}$, which I refer to as "the state". Further assume that each state occurs with equal probability (i.e. $1/2$). I cannot observe this state; however, I have access to a sample $\{r_i(\theta)\}_{i=1}^n$, where each element is a realization of a random variable $R_i(\theta)$ distributed as follows:
$$R_i(\theta)=\begin{cases} p^A\cdot u_A-c+\epsilon_i &\text{if} \quad \theta=1\\ (1-p^A)\cdot u_B-c+\epsilon_i &\text{if} \quad \theta=0\\ \end{cases}$$
where $\epsilon_i\sim\mathcal{N}(0,1/\rho_\epsilon)$, with $\rho_\epsilon$ the precision of the noise, and all the other variables are deterministic. Notice that in some sense I would like to have no noise term and observe the "true" realizations, but I instead only have access to noisy realizations of $R_i$, which I denote $r_i$. My question is then the following: can I use my sample $\{r_i\}_{i=1}^n$ to infer the true state of the world $\theta$? In some Bayesian sense, I was thinking about updating in the following way:
$$\mathbb{P}(\theta=1|\text{observing sample $r_i(\theta)$})=\frac{L(r_i|\theta=1)\cdot \mathbb{P}(\theta=1)}{L(r_i|\theta=1)\cdot \mathbb{P}(\theta=1)+L(r_i|\theta=0)\cdot \mathbb{P}(\theta=0)}$$
where $L(r_i|\theta=1)$ is some sort of "likelihood" of observing the sample $r_i(\theta)$ given the state of the world. I am being very loose here, but essentially I am interested in computing the probability of this underlying state. What do you think? Do you have any suggestions on how to compute $\mathbb{P}(\theta=1|\text{observing sample $r_i(\theta)$})$?
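For concreteness, here is a minimal simulation of the data-generating process I have in mind (the parameter values for $p^A$, $u_A$, $u_B$, $c$, and $\rho_\epsilon$ below are illustrative placeholders, not values from my actual application):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters -- illustrative assumptions only.
p_A, u_A, u_B, c = 0.6, 1.0, 1.0, 0.2
rho_eps = 4.0          # noise precision, so Var(eps_i) = 1 / rho_eps
n = 50                 # sample size

# State-dependent means of R_i(theta).
mu_1 = p_A * u_A - c          # mean when theta = 1
mu_0 = (1 - p_A) * u_B - c    # mean when theta = 0

# Draw the latent state with prior P(theta = 1) = 1/2, then the noisy sample.
theta = rng.integers(0, 2)
mu = mu_1 if theta == 1 else mu_0
r = mu + rng.normal(0.0, 1.0 / np.sqrt(rho_eps), size=n)
```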
Let $R_1, ..., R_n \sim \text{IID } \mathcal{N} (\mu_\theta, \lambda)$ where $\mu_\theta$ is the mean parameter and $\lambda$ is the precision parameter (inverse of the variance). In your setup this means $\mu_1 = p^A u_A - c$, $\mu_0 = (1-p^A) u_B - c$, and $\lambda = \rho_\epsilon$. Then your likelihood function (taking $\lambda$ as known) is:
$$L_{\boldsymbol{r}}(\theta) = \prod_{i=1}^n \mathcal{N}(r_i |\mu_\theta, \lambda) \propto \exp \left( -\frac{\lambda}{2} \sum_{i=1}^n (r_i - \mu_\theta)^2 \right) = \exp \left( -\frac{\lambda}{2} || \boldsymbol{r} -\mu_\theta \boldsymbol{1} ||^2 \right).$$
To facilitate our analysis we calculate the statistics $S_0 \equiv || \boldsymbol{r} -\mu_0 \boldsymbol{1} ||^2$ and $S_1 \equiv || \boldsymbol{r} -\mu_1 \boldsymbol{1} ||^2$ using your data and your (known) mean parameters. Letting $\pi \equiv \mathbb{P}(\theta = 1)$ we then have the posterior:
$$\begin{aligned} \mathbb{P}(\theta = 1 | \boldsymbol{r}) &= \frac{\pi L_{\boldsymbol{r}}(1)}{(1-\pi) L_{\boldsymbol{r}}(0) + \pi L_{\boldsymbol{r}}(1)} \\ &= \frac{\pi \exp \left( -\frac{\lambda}{2} S_1 \right)}{(1-\pi) \exp \left( -\frac{\lambda}{2} S_0 \right) + \pi \exp \left( -\frac{\lambda}{2} S_1 \right)} \\ &= \frac{\pi \exp \left( \frac{\lambda}{2} (S_0 - S_1) \right)}{1- \pi + \pi \exp \left( \frac{\lambda}{2} (S_0 - S_1) \right)}. \end{aligned}$$
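Equivalently, on the log-odds scale,

$$\log \frac{\mathbb{P}(\theta = 1 | \boldsymbol{r})}{\mathbb{P}(\theta = 0 | \boldsymbol{r})} = \log \frac{\pi}{1-\pi} + \frac{\lambda}{2} (S_0 - S_1),$$

so the data favour $\theta = 1$ exactly when $S_1 < S_0$, i.e., when the sample lies closer to $\mu_1$ than to $\mu_0$.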
Substituting your data and your (known) mean parameters gives the values $S_0$ and $S_1$, and hence the posterior you're looking for.
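Here is a minimal sketch of this computation in Python (the function name and the log-odds formulation are my own choices; it assumes known $\mu_0$, $\mu_1$, $\lambda$, and prior $\pi$):

```python
import numpy as np
from scipy.special import expit  # numerically stable logistic function

def posterior_theta1(r, mu0, mu1, lam, pi=0.5):
    """Posterior P(theta = 1 | r) for an IID N(mu_theta, 1/lam) sample.

    Works on the log-odds scale to avoid overflow when
    lam/2 * (S0 - S1) is large in magnitude.
    """
    r = np.asarray(r, dtype=float)
    S0 = np.sum((r - mu0) ** 2)  # squared distance to the theta = 0 mean
    S1 = np.sum((r - mu1) ** 2)  # squared distance to the theta = 1 mean
    # Log posterior odds: log(pi / (1 - pi)) + lam/2 * (S0 - S1).
    log_odds = np.log(pi) - np.log1p(-pi) + 0.5 * lam * (S0 - S1)
    return expit(log_odds)
```

For example, with the simulated sample from the question, `posterior_theta1(r, mu_0, mu_1, rho_eps)` returns the posterior probability that $\theta = 1$; with a moderate sample size and well-separated means it concentrates near the simulated state.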