This problem is motivated by attempting to construct a total ordering out of an arbitrarily large set of potentially contradictory partial orderings.
Let's assume we have some set of items $I$ for which people (humans) have some degree of affinity or desire or what-have-you. The affinity each human has for an item $i$ is drawn from a Gaussian with mean $\mu_i$, where $0 \le \mu_i \le 1$, and variance $\sigma_i^2$.
Humans do not directly report these values, however. Instead, they select from pairs of items the item they like more. So we have that person $j$ liked item $m$ more than item $n$, person $x$ liked $n$ more than $k$, and so on and so forth. Given these tuples, is it possible to estimate the means of these distributions, and thereby construct a total order over all items in the set?
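The generative model described above can be sketched in code (the particular means, standard deviations, and function name here are illustrative, not part of the problem statement):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_comparisons(mu, sigma, n_reports=1000):
    """Generate pairwise reports from the model: each report picks a random
    item pair, draws one Gaussian affinity per item, and records the winner."""
    m = len(mu)
    reports = []
    for _ in range(n_reports):
        i, j = rng.choice(m, size=2, replace=False)
        a_i = rng.normal(mu[i], sigma[i])  # person's affinity for item i
        a_j = rng.normal(mu[j], sigma[j])  # person's affinity for item j
        reports.append((i, j) if a_i > a_j else (j, i))
    return reports  # list of (winner, loser) tuples
```

Running this with well-separated means produces reports that overwhelmingly favour the higher-mean item, with occasional "contradictory" reversals when the Gaussians overlap.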
You could try the following:
Let $\pi_n\left( n_i | \mu _i, \mu _j\right)$ be the marginal PDF describing the probability that $n$ comparisons between items $i$ and $j$ drawn from processes with means $\mu _i$ and $\mu _j$ would yield $n_i$ instances of $i$ being favoured over $j$. You could compute this distribution easily either by using basic statistics or (the lazy way) via Monte Carlo simulation.
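Both routes can be sketched concretely (the $\sigma$ values are illustrative assumptions). The "basic statistics" route: since the difference of two independent Gaussians is itself Gaussian, each comparison is a Bernoulli trial with $p = \Phi\left((\mu_i - \mu_j)/\sqrt{\sigma_i^2 + \sigma_j^2}\right)$, so $\pi_n$ is binomial. The Monte Carlo route is shown alongside it:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def pi_n_exact(n_i, n, mu_i, mu_j, sigma_i=0.1, sigma_j=0.1):
    """Closed form: i beats j with p = Phi((mu_i - mu_j)/sqrt(si^2 + sj^2)),
    so n_i wins out of n is binomial."""
    p = 0.5 * (1.0 + math.erf((mu_i - mu_j) /
                              math.sqrt(2.0 * (sigma_i**2 + sigma_j**2))))
    return math.comb(n, n_i) * p**n_i * (1.0 - p)**(n - n_i)

def pi_n_mc(n_i, n, mu_i, mu_j, sigma_i=0.1, sigma_j=0.1, trials=20000):
    """The lazy Monte Carlo route: simulate n comparisons many times and
    count how often i wins exactly n_i of them."""
    a_i = rng.normal(mu_i, sigma_i, size=(trials, n))
    a_j = rng.normal(mu_j, sigma_j, size=(trials, n))
    wins = (a_i > a_j).sum(axis=1)
    return float(np.mean(wins == n_i))
```

For matched means the two agree on $p = 1/2$, and the Monte Carlo estimate converges to the closed form as `trials` grows.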
Next you would have to make an assumption about the joint PDF of $\left( {{\mu _i},{\mu _j}} \right)$, denoted $\pi\left( {{\mu _i},{\mu _j}}\right)$. Naturally we would expect any two means not to be correlated, hence $$\pi\left( {{\mu _i},{\mu _j} } \right) = \pi\left( {{\mu _i}} \right)\pi\left( {\mu _j} \right)$$ but beyond this we need to make assumptions about the distribution for any given $\mu$. I'll leave this up to you. One obvious choice would be that the means are uniformly distributed over the interval $\left[0,1\right]$, in which case $$\pi\left( {{\mu _i},{\mu _j}}\right) = 1\;\forall\; \mu _i ,\mu _j \in \left[0,1\right]$$
Given $\pi_n\left( n_i | \mu _i, \mu _j\right)$ and $\pi\left( {{\mu _i},{\mu _j}}\right)$, you can compute the full joint PDF $${\pi _n}\left( {{n_i},{\mu _i},{\mu _j}} \right) = {\pi _n}\left( {{n_i}|{\mu _i},{\mu _j}} \right)\pi \left( {{\mu _i},{\mu _j}} \right)$$
It then follows from basic Bayesian probability that $${\pi _n}\left( {{\mu _i},{\mu _j}|{n_i}} \right) = \frac{{{\pi _n}\left( {{n_i},{\mu _i},{\mu _j}} \right)}}{{\iint {{\pi _n}\left( {{n_i},{\mu _1},{\mu _2}} \right)d{\mu _1}d{\mu _2}}}}$$
Now let $S$ be the set of all $\left( i,j\right)$ item pairs that have been compared. For each element in this set you have an $n$ (the number of people who compared the items) and an $n_i$ (the number of times $i$ won out). Hence you can go to your calculated function (or table) ${\pi _n}\left( {{\mu _i},{\mu _j}|{n_i}} \right)$ and determine the conditional joint distribution for $\mu _i$ and $\mu _j$ given $n$ and $n_i$, which we'll denote $\theta\left( \mu _i ,\mu _j\right)$.
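Under the uniform prior, $\theta$ can be tabulated on a discrete grid over $\left[0,1\right]^2$, with the double integral in the denominator becoming a sum over grid cells. A minimal sketch, assuming a common known standard deviation $\sigma$ for all items (the grid resolution and $\sigma$ value are illustrative):

```python
import math
import numpy as np

erf = np.vectorize(math.erf)  # elementwise erf without extra dependencies

def posterior_grid(n_i, n, sigma=0.1, res=101):
    """Tabulate theta(mu_i, mu_j) = pi_n(mu_i, mu_j | n_i) on a grid over
    [0,1]^2, assuming the uniform prior and a common affinity std sigma."""
    mus = np.linspace(0.0, 1.0, res)
    mi, mj = np.meshgrid(mus, mus, indexing="ij")
    # P(i preferred to j) = Phi((mu_i - mu_j) / sqrt(sigma^2 + sigma^2))
    p = 0.5 * (1.0 + erf((mi - mj) / (2.0 * sigma)))
    like = p**n_i * (1.0 - p)**(n - n_i)  # binomial kernel; comb(n, n_i) cancels
    return mus, like / like.sum()         # discrete analogue of the double integral
```

Note that the binomial coefficient cancels between numerator and denominator, so only the $p^{n_i}(1-p)^{n-n_i}$ kernel is needed.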
This gives us the basis for a maximum likelihood estimator of the form $$\Theta \left( {{\mu _1},{\mu _2}, \ldots ,{\mu _m}} \right) = \prod\limits_{\left( {i,j} \right) \in S} {\theta \left( {{\mu _i},{\mu _j}} \right)}$$ such that the maximum likelihood set of means is given by $${\mu _1},{\mu _2}, \ldots ,{\mu _m} = \arg \max \Theta \left( {{\mu _1},{\mu _2}, \ldots ,{\mu _m}} \right)$$ or (what will probably be easier computationally), since $\log \prod\nolimits_{\left( {i,j} \right) \in S} {\theta \left( {{\mu _i},{\mu _j}} \right)} = \sum\nolimits_{\left( {i,j} \right) \in S} {\log \theta \left( {{\mu _i},{\mu _j}} \right)}$, $${\mu _1},{\mu _2}, \ldots ,{\mu _m} = \arg \max \sum\limits_{\left( {i,j} \right) \in S} {\log \theta \left( {{\mu _i},{\mu _j}} \right)} $$
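A minimal sketch of the log-sum form, assuming a common known $\sigma$ and the uniform prior (so $\theta$ reduces to the binomial likelihood), with a crude grid-based coordinate ascent standing in for a proper nonlinear optimiser (function names, the grid resolution, and the sweep count are all illustrative choices):

```python
import math
import numpy as np

def log_lik(mu, pairs, sigma=0.1):
    """Summed log-likelihood sum_{(i,j) in S} log theta(mu_i, mu_j),
    with theta the binomial likelihood under the uniform prior.
    pairs: list of (i, j, n_i, n) -- i beat j n_i times out of n."""
    ll = 0.0
    for i, j, n_i, n in pairs:
        # P(i preferred to j) for Gaussian affinities with common std sigma
        p = 0.5 * (1.0 + math.erf((mu[i] - mu[j]) / (2.0 * sigma)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard the logarithms
        ll += n_i * math.log(p) + (n - n_i) * math.log(1.0 - p)
    return ll

def fit_means(pairs, m, sigma=0.1, res=201, sweeps=30):
    """Maximise log_lik by cycling through the means, optimising each over
    a grid on [0, 1] while holding the others fixed."""
    grid = np.linspace(0.0, 1.0, res)
    mu = np.full(m, 0.5)
    for _ in range(sweeps):
        for k in range(m):
            scores = []
            for g in grid:
                mu[k] = g
                scores.append(log_lik(mu, pairs, sigma))
            mu[k] = grid[int(np.argmax(scores))]
    return mu
```

The recovered means are only identified up to the information in the comparisons (and clipped to $\left[0,1\right]$ by the grid), but their ordering, which is what the total order requires, comes out of the relative win counts.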
Depending on the number of means you have to optimize over and the nature of $\Theta$, this optimization could still be a formidable task. But I imagine that with a goodly sized $S$, the optimum would be fairly well defined and the problem would be amenable to standard nonlinear multivariable optimization techniques.
There is certainly no guarantee of efficiency (i.e. of the estimator's variance meeting the Cramér-Rao lower bound), but I believe this would give you a defensible result.