Let us define the random variable $x \sim \mathcal{N}(\mu_{x},\sigma_{x}^{2})$. Assume that I observe two different estimates of $x$, denoted by $\hat{x}_{1} = x + e_{1}$ and $\hat{x}_{2} = x + e_{2}$, respectively, where $e_{1} \sim \mathcal{N}(0, \sigma_{e_{1}}^{2})$ and $e_{2} \sim \mathcal{N}(0, \sigma_{e_{2}}^{2})$ are independent estimation errors.
How can I define the conditional distribution of $x$ given $\hat{x}_{1}$ and $\hat{x}_{2}$?
When only one estimate $\hat{x} = x + e$ is observed, with $e \sim \mathcal{N}(0, \sigma_{e}^{2})$, this is a standard problem and the conditional distribution is $x \mid \hat{x} \sim \mathcal{N}(\bar{\mu},\bar{\sigma}^{2})$, with
\begin{align} \bar{\mu} & = \mu_{x} + \frac{\sigma_{x}^{2}}{\sigma_{x}^{2} + \sigma_{e}^{2}} (\hat{x} - \mu_{x}), \\ \bar{\sigma}^{2} & = \sigma_{x}^{2} - \frac{\sigma_{x}^{4}}{\sigma_{x}^{2} + \sigma_{e}^{2}} \end{align}
(cf. https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Bivariate_case).
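These single-estimate formulas are easy to sanity-check by Monte Carlo: draw many $(x, \hat{x})$ pairs and keep those whose $\hat{x}$ falls in a narrow window around the observed value. A sketch in Python (the parameter values and the window half-width are arbitrary choices of mine):

```python
import math
import random

random.seed(0)

mu_x, var_x = 1.0, 1.0      # prior on x
var_e = 0.25                # variance of the estimation error e
xhat_obs = 2.0              # the observed estimate we condition on
eps = 0.05                  # half-width of the conditioning window

# Closed-form posterior from the bivariate-normal conditioning formula
post_mu = mu_x + var_x / (var_x + var_e) * (xhat_obs - mu_x)
post_var = var_x - var_x**2 / (var_x + var_e)

# Monte Carlo: keep draws of x whose noisy estimate lands near xhat_obs
kept = []
for _ in range(400_000):
    x = random.gauss(mu_x, math.sqrt(var_x))
    xhat = x + random.gauss(0.0, math.sqrt(var_e))
    if abs(xhat - xhat_obs) < eps:
        kept.append(x)

emp_mu = sum(kept) / len(kept)
emp_var = sum((v - emp_mu) ** 2 for v in kept) / (len(kept) - 1)

print(post_mu, post_var)    # exact: 1.8, 0.2
print(emp_mu, emp_var)      # should be close to the exact values
```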
Then, how can we obtain the conditional distribution when two (or more) estimates are observed? I have thought about two possibilities, but I don't know if we can do something better:
1) Derive the conditional distribution of $x$ given the precision-weighted average of $\hat{x}_{1}$ and $\hat{x}_{2}$ (each weighted by the inverse of its error variance);
2) Derive the conditional distribution of $x$ given $\hat{x}_{1}$ and then condition the resulting distribution on $\hat{x}_{2}$.
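For what it's worth, the two routes above can be compared numerically: both apply the single-estimate update, just in different orders. A sketch in Python (the helper name `condition_once` and the parameter values are my own):

```python
def condition_once(mu, var, xhat, var_e):
    """One conditioning step using the single-estimate formulas."""
    gain = var / (var + var_e)
    return mu + gain * (xhat - mu), var - var**2 / (var + var_e)

mu_x, var_x = 0.5, 2.0
xhat1, var_e1 = 1.2, 0.3
xhat2, var_e2 = 0.8, 0.6

# Route 2: condition on xhat1, then condition the result on xhat2
mu_a, var_a = condition_once(mu_x, var_x, xhat1, var_e1)
mu_a, var_a = condition_once(mu_a, var_a, xhat2, var_e2)

# Route 1: precision-weighted average of the two estimates, used as a
# single observation with its effective error variance
w1, w2 = 1.0 / var_e1, 1.0 / var_e2
xhat_avg = (w1 * xhat1 + w2 * xhat2) / (w1 + w2)
var_avg = 1.0 / (w1 + w2)
mu_b, var_b = condition_once(mu_x, var_x, xhat_avg, var_avg)

print(mu_a, var_a)
print(mu_b, var_b)   # identical up to rounding
```

Both routes collapse to the same precision-weighted posterior, because the errors are independent and everything is Gaussian.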
To expand on my comment, you can write the (unnormalised) log-density as $$ \ln p(x \mid \hat{\mathbf{x}}) = \ln p(x, \hat{\mathbf{x}}) + \mbox{const.}, $$ and then, up to a constant, $$ \begin{align*} \ln p(x,\hat{\mathbf{x}}) &= -\frac{1}{2}\sum_{i=1}^{n}\frac{(x-\hat{x}_i)^2}{\sigma_{e_i}^2} - \frac{1}{2\sigma_x^2}(x-\mu_x)^2 \\ &= -\frac{1}{2}\left(\sum_{i=1}^n\frac{1}{\sigma_{e_i}^2} + \frac{1}{\sigma_x^2}\right)x^2 + \left(\sum_{i=1}^n\frac{\hat{x}_i}{\sigma_{e_i}^2}+ \frac{\mu_x}{\sigma_x^2}\right)x. \end{align*} $$ This tells us that the conditional is Gaussian. If we expand the log-density of a general Gaussian we have $$ \ln p(x ; \mu, \sigma^2) = -\frac{1}{2\sigma^2} x^2 + \frac{\mu}{\sigma^2}x + \mbox{const.} $$ Therefore, matching up the coefficients, we have $$ \bar{\sigma}^2 = \left(\sum_{i=1}^{n}\frac{1}{\sigma_{e_i}^2}+\frac{1}{\sigma_x^2}\right)^{-1} $$ and $$ \bar{\mu} = \bar{\sigma}^2 \cdot \left( \sum_{i=1}^{n} \frac{\hat{x}_i}{\sigma_{e_i}^2}+\frac{\mu_x}{\sigma_{x}^2}\right). $$
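These closed-form expressions can be checked against brute-force numerical integration of the unnormalised density. A sketch in Python (the three estimates and their variances are arbitrary):

```python
import math

mu_x, var_x = 0.0, 1.0
xhats = [0.9, 1.4, 0.6]
var_es = [0.5, 1.0, 0.25]

# Closed form: precision-weighted posterior
prec = 1.0 / var_x + sum(1.0 / v for v in var_es)
post_var = 1.0 / prec
post_mu = post_var * (mu_x / var_x + sum(xh / v for xh, v in zip(xhats, var_es)))

# Quadrature over the unnormalised density p(x) * prod_i p(xhat_i | x)
def log_p(x):
    lp = -0.5 * (x - mu_x) ** 2 / var_x
    for xh, v in zip(xhats, var_es):
        lp += -0.5 * (x - xh) ** 2 / v
    return lp

xs = [-8.0 + 16.0 * k / 40_000 for k in range(40_001)]
ws = [math.exp(log_p(x)) for x in xs]
z = sum(ws)
num_mu = sum(w * x for w, x in zip(ws, xs)) / z
num_var = sum(w * (x - num_mu) ** 2 for w, x in zip(ws, xs)) / z

print(post_mu, post_var)    # exact: 0.7, 0.125
print(num_mu, num_var)      # should match to several decimal places
```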
And when $n=1$, $$ \begin{align} \bar{\sigma}^2 &= \left(\frac{\sigma_x^2 + \sigma_e^2}{\sigma_x^2\sigma_e^2}\right)^{-1} \\ &= \frac{\sigma_x^2\sigma_e^2}{\sigma_x^2 + \sigma_e^2} = \sigma_x^2 - \frac{\sigma_x^4}{\sigma_x^2 + \sigma_e^2} \end{align} $$ and $$ \begin{align} \bar{\mu} &= \bar{\sigma}^2 \cdot \left(\frac{\mu_x}{\sigma_x^2} + \frac{\hat{x}}{\sigma_e^2}\right) \\ &= \frac{\sigma_x^2\hat{x} + \sigma_e^2 \mu_x}{\sigma_x^2 + \sigma_e^2} \\ &= \mu_x + \frac{\sigma_x^2}{\sigma_x^2 + \sigma_e^2}\hat{x} + \frac{\sigma_e^2}{\sigma_x^2 + \sigma_e^2}\mu_x - \frac{\sigma_x^2 + \sigma_e^2}{\sigma_x^2 + \sigma_e^2}\mu_x \\ &= \mu_x + \frac{\sigma_x^2}{\sigma_x^2 + \sigma_e^2}(\hat{x} - \mu_x). \end{align} $$