I am seeking a sequence of functions $f_n:{\mathbb R}^n \rightarrow {\mathbb R}$ where $n=1, 2, 3, \ldots$ which have the following properties:
Within the unit cube, for each point $(x_1, x_2, \ldots, x_{n})$ in $[0,1]^n$ other than the origin, the value of $f_n$ lies strictly between the coordinate product and the coordinate sum:
- $f_n$ is strictly below the sum: $f_n(x_1, x_2, \ldots, x_{n}) < \sum_{i=1}^n x_i$
- $f_n$ is strictly above the product: $f_n(x_1, x_2, \ldots, x_{n}) > \prod_{i=1}^n x_i$
At the origin, $f_n(0,0, \ldots, 0) = 0$
$f_n$ is a symmetric function of its $n$ inputs: permuting the arguments does not alter its value. That is, for any permutation $\pi$ of $\{1, 2, \ldots, n\}$, we have
- $f_n(x_1, x_2, \ldots, x_{n}) = f_n(x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(n)})$
Additionally, for any partition of $n$ into $k$ positive integers, $p_1+p_2+\ldots+p_k = n$, if we define
- $q_1 := f_{p_1}(x_1, x_2, \ldots, x_{p_1})$
- $q_2 := f_{p_2}(x_{p_1+1}, x_{p_1+2}, \ldots, x_{p_1+p_2})$
- $q_3 := f_{p_3}(x_{p_1+p_2+1}, x_{p_1+p_2+2}, \ldots, x_{p_1+p_2+p_3})$
- ...
- $q_k := f_{p_k}(x_{\sum_{i=1}^{k-1}p_i + 1}, \ldots, x_{n-1}, x_{n})$
then we would find that $f_n(x_1, x_2, \ldots, x_{n}) = f_k(q_1, q_2, q_3, \ldots, q_k)$
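To make the grouping condition concrete, here is a small numeric check (a sketch only: it takes $f_n$ to be the plain coordinate sum, which satisfies the grouping identity even though it fails the strict inequality requirements):

```python
import random

def f(xs):
    # Illustrative candidate: the plain coordinate sum. It satisfies the
    # grouping identity, though it fails the strict inequality constraints.
    return sum(xs)

random.seed(0)
xs = [random.random() for _ in range(6)]
partition = [2, 3, 1]  # p_1 + p_2 + p_3 = 6

# q_m = f_{p_m} applied to the m-th consecutive block of the x's
qs, start = [], 0
for p in partition:
    qs.append(f(xs[start:start + p]))
    start += p

# f_6(x_1, ..., x_6) == f_3(q_1, q_2, q_3), up to float rounding
assert abs(f(xs) - f(qs)) < 1e-12
```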
Does anyone have candidates for such a countably infinite family of functions $f_1, f_2, \ldots$, or have an argument for why such a sequence of functions cannot exist? Informally, I'm looking for "a commutative associative operation that lies between + and *".
(The question began in the context of seeking ways of aggregating probability distributions. For concreteness, say you have a finite set of future outcomes $O=\{o_1, o_2, \ldots, o_r\}$, and a finite set of fortune-telling experts $E_1, E_2, \ldots, E_s$. Each expert forecasts the future as a probability distribution on $O$. You wish to aggregate these experts' distributions into a single distribution $B$ over the set $O$, and to do the aggregation in a way that is independent of the order and grouping in which you look at and combine the experts' opinions. Two simplistic approaches are to take $B(o_j)$ to be (i) a renormalization of $\prod_{i=1}^s E_i(o_j)$ or (ii) a renormalization of $\sum_{i=1}^s E_i(o_j)$. Approach (i) has the drawback that with multiplication every expert effectively has veto power: if every outcome is assigned $0$ probability by at least one expert, then renormalization fails. That leaves approach (ii). But, what else is there besides the sum and product? I think the current formulation of the problem is simpler than the application because it totally ignores renormalization. But perhaps someone will have suggestions on the original application directly?)
As Hagen says in the comments, the strict inequality $f_n(x_1, \dots, x_n) > \prod x_i$ is unsatisfiable at $(1, 1, \dots, 1)$. If you weaken this condition to a weak inequality then you can take $f_n(x_1, \dots, x_n) = \min(x_1, \dots, x_n)$, which satisfies the strict inequality unless one of the $x_i$ is zero or all but one of the $x_i$ are equal to $1$.
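A quick numeric sanity check of the $\min$ suggestion (a sketch: it verifies the weak inequalities and the grouping identity on random points of the unit cube):

```python
import math
import random

def f(xs):
    # Candidate from the answer above: the minimum of the coordinates
    return min(xs)

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 5)
    xs = [random.random() for _ in range(n)]
    assert f(xs) <= sum(xs)        # weakly below the sum
    assert f(xs) >= math.prod(xs)  # weakly above the product
    # grouping: the min of block-wise mins is the overall min
    if n >= 2:
        k = random.randint(1, n - 1)
        assert f([f(xs[:k]), f(xs[k:])]) == f(xs)
```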
Edit:
FWIW, I find this statement of the problem better, because now it's clearer what the different possible choices mean. E.g. the $\min$ suggestion I made above wouldn't be very meaningful for this application, because it ignores so much of the experts' information. Also, in this formulation the inequality constraints seem less important to me than the symmetry constraint.
Taking the sum and renormalizing has a pleasant conceptual interpretation: we obtain this distribution by 1) sampling an expert uniformly at random, then 2) sampling from that expert's distribution.
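That mixture view is easy to write down directly; here is a minimal sketch (the dictionary-based `aggregate_mixture` helper and the example forecasts are hypothetical):

```python
def aggregate_mixture(expert_dists):
    """Normalized sum: the uniform mixture of the experts' distributions."""
    s = len(expert_dists)
    outcomes = expert_dists[0].keys()
    return {o: sum(d[o] for d in expert_dists) / s for o in outcomes}

# Hypothetical example: two experts forecasting two outcomes
experts = [
    {"rain": 0.7, "sun": 0.3},
    {"rain": 0.2, "sun": 0.8},
]
agg = aggregate_mixture(experts)  # ≈ {'rain': 0.45, 'sun': 0.55}
```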
The sum can also be produced by the following argument. Suppose for simplicity that we're trying to aggregate opinions about the bias of a coin which flips heads with some unknown probability $p$, and each expert $E_i$ has an opinion $p_i$ about this. We can imagine that each expert arrived at their opinion by flipping the coin the same large number of times, say $N$, and observing $N p_i$ heads. To aggregate that information we behave as if we've seen all the evidence they've seen: the coin was flipped $sN$ times in total and came up heads $\sum_i N p_i$ times. That gives an empirical probability of $\frac{\sum_i p_i}{s}$, and this argument generalizes.
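Sketched in code (the helper name and the common sample size $N$ are hypothetical; note the result is just the mean of the $p_i$ and doesn't depend on $N$):

```python
def aggregate_coin_opinions(ps, N=1000):
    """Pool opinions p_i as if each expert saw N flips with N*p_i heads."""
    total_flips = N * len(ps)
    total_heads = sum(N * p for p in ps)
    # The N's cancel: this is the mean of the p_i, up to float rounding
    return total_heads / total_flips
```

For instance, `aggregate_coin_opinions([0.2, 0.4, 0.9])` gives (approximately) $0.5$, regardless of the choice of $N$.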
So the sum seems fine to me. You could also consider renormalizing something like $\sum E_i(o_j)^p$ but that would be a less principled choice, or at least I don't have a principled way to justify or derive it.