I have two probability distributions A and B.
First I would like to estimate how much they differ. As a metric I use the Jensen–Shannon distance (i.e. the square root of the Jensen–Shannon divergence).
This metric is bounded between 0 and 1.
If the probability distributions differ by less than 10% (i.e. d < 0.1), I would like to create a "super probability distribution" that ensembles the two. Is there a way to do that? I guess that simply averaging the two probability distributions is not the right choice...
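For concreteness, here is a minimal sketch of the distance check I have in mind, using `scipy.spatial.distance.jensenshannon` (the vectors `a` and `b` are made-up example distributions; `base=2` gives the [0, 1]-bounded version of the metric):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Two hypothetical discrete probability distributions A and B
a = np.array([0.10, 0.40, 0.50])
b = np.array([0.15, 0.35, 0.50])

# jensenshannon returns the JS *distance* (square root of the divergence);
# with base=2 it is bounded between 0 and 1
d = jensenshannon(a, b, base=2)

# Only combine the distributions when they are close enough
close_enough = d < 0.1
```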
EDIT: Please consider the case of having 3 (or more) probability distributions A, B, C whose pairwise distances (ab, ac, bc) are all < 0.1, and where the resulting "super probability distribution" should tend toward the average of the distributions that differ least...
One possible solution that avoids mixing is to average the parameters. For instance, if you have several normal distributions with means $\mu_1,\mu_2,...$ and standard deviations $\sigma_1,\sigma_2,...$, you can average these and use $\bar{\mu}$ and $\bar{\sigma}$ for the super-distribution.
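A quick sketch of this idea, assuming three hypothetical fitted normals (the means and standard deviations below are made-up example values):

```python
import numpy as np
from scipy import stats

# Hypothetical fitted parameters (mu_i, sigma_i) for distributions A, B, C
mus = np.array([0.00, 0.10, 0.05])
sigmas = np.array([1.00, 1.05, 0.95])

# The super-distribution is a single normal built from the averaged parameters
super_dist = stats.norm(loc=mus.mean(), scale=sigmas.mean())
```

Note that, unlike a mixture, the result stays within the same parametric family, which is often exactly what you want when the components are already close.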
Another possibility is to average the cumulative distribution functions.
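A sketch of CDF averaging on a grid, again with two made-up normal components; the pointwise average of valid CDFs is itself a valid CDF (monotone, running from 0 to 1):

```python
import numpy as np
from scipy import stats

# Hypothetical component distributions A and B
a = stats.norm(0.0, 1.0)
b = stats.norm(0.1, 1.1)

# Evaluate both CDFs on a common grid and average them pointwise
x = np.linspace(-5, 5, 1001)
super_cdf = (a.cdf(x) + b.cdf(x)) / 2
```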
Note that both approaches coincide with mixing for discrete distributions, where the probability mass functions are effectively real-valued vectors.