Suppose I have a set of random variables $X_1, X_2, \ldots, X_n$ and their sum $S = \sum_{i=1}^n X_i$. Now suppose I have a constraint on the distribution of $S$, for example from some data.
Looking at any one of the $X$ values, Bayes' theorem gives
$P(X_j \mid S, X_{-j}) \propto P(S \mid \{X_i\})\, P(X_j)$. This means that if you know $S$ and all of the $X$ values other than $X_j$, you can sample $X_j$; iterating over $j$ is basically just Gibbs sampling. The problem is that this is not very efficient: the sum constraint makes the different $X_i$ values anticorrelated with each other, so the chain mixes slowly.
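For concreteness, here is a minimal sketch of that Gibbs scheme under assumed Gaussian forms (the independent priors $X_i \sim N(\mu_i, \sigma_i^2)$, the soft constraint $S_{\text{obs}} \sim N(\sum_i X_i, \tau^2)$, and all the numbers are my illustrative choices, not from the problem); each full conditional is then Gaussian in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed): independent priors X_i ~ N(mu_i, sigma_i^2)
# and a soft sum constraint S_obs ~ N(sum_i X_i, tau^2).
mu = np.array([1.0, 2.0, 3.0])
sigma2 = np.array([1.0, 0.5, 2.0])
s_obs, tau2 = 10.0, 0.25

def gibbs(n_iter=5000):
    x = mu.copy()                          # start at the prior means
    out = np.empty((n_iter, len(mu)))
    for t in range(n_iter):
        for j in range(len(mu)):
            r = s_obs - (x.sum() - x[j])   # residual the j-th variable must explain
            # product of the Gaussian prior and the Gaussian "likelihood" N(r, tau2)
            prec = 1.0 / sigma2[j] + 1.0 / tau2
            mean = (mu[j] / sigma2[j] + r / tau2) / prec
            x[j] = rng.normal(mean, np.sqrt(1.0 / prec))
        out[t] = x
    return out

samples = gibbs()
print(samples.mean(axis=0), samples.sum(axis=1).mean())
```

The anticorrelation problem shows up directly here: each update of $x_j$ has to absorb whatever residual the other components leave, so the components trade off against one another from sweep to sweep.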
You can try drawing from the joint distribution, $P(\{X_i\} \mid S) \propto P(S \mid \{X_i\}) \, P(\{X_i\})$, but this requires something like the Metropolis-Hastings algorithm, with all the attendant tuning.
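For comparison, a bare-bones random-walk Metropolis-Hastings sketch for that joint target, again under an assumed Gaussian setup with illustrative numbers; the `step` parameter is exactly the tuning burden in question:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup (assumed): independent priors X_i ~ N(mu_i, sigma_i^2)
# and a soft sum constraint S_obs ~ N(sum_i X_i, tau^2).
mu = np.array([1.0, 2.0, 3.0])
sigma2 = np.array([1.0, 0.5, 2.0])
s_obs, tau2 = 10.0, 0.25

def log_post(x):
    # log P(S_obs | {x}) + log P({x}), up to an additive constant
    return (-0.5 * (s_obs - x.sum()) ** 2 / tau2
            - 0.5 * np.sum((x - mu) ** 2 / sigma2))

def mh(n_iter=20000, step=0.3):
    x = mu.copy()
    lp = log_post(x)
    chain, accepts = np.empty((n_iter, len(mu))), 0
    for t in range(n_iter):
        prop = x + step * rng.normal(size=len(mu))  # random-walk proposal: needs tuning
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepts += 1
        chain[t] = x
    return chain, accepts / n_iter

chain, acc_rate = mh()
print(acc_rate, chain[5000:].sum(axis=1).mean())
```

An isotropic proposal like this is a poor match for a target squeezed along the hyperplane $\sum_i x_i \approx s$, which is why the step size matters so much.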
I am wondering if there is an efficient way to do this sampling, perhaps by first drawing a value of $S$ and then sampling the $X_i$ conditional on that value. That requires enforcing a sum constraint on the $X_i$, however, and I don't know how to do it.
Any suggestions?
I'm happy to assume that the $X_i$ and $S$ are all normally distributed, and even that the $X_i$ are independent if necessary.
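If I understand the structure right, under those Gaussian assumptions the two-stage idea seems to be exact and cheap: draw $s$ from the distribution of $S$, draw an unconstrained sample from the prior, and project it onto the hyperplane $\sum_i x_i = s$ using Matheron's rule ("conditioning by kriging"), which for a diagonal prior covariance gives an exact draw from $X \mid \sum_i X_i = s$. A sketch, with all numbers illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Gaussian setup (assumed): independent priors X_i ~ N(mu_i, sigma_i^2)
# and a data-driven constraint S ~ N(m_s, v_s) on the sum.
mu = np.array([1.0, 2.0, 3.0])
sigma2 = np.array([1.0, 0.5, 2.0])
m_s, v_s = 10.0, 0.25

def sample(n):
    # Step 1: draw the sum from its distribution.
    s = rng.normal(m_s, np.sqrt(v_s), size=n)                 # shape (n,)
    # Step 2: draw unconstrained prior samples...
    z = rng.normal(mu, np.sqrt(sigma2), size=(n, len(mu)))
    # ...and project each onto the hyperplane sum(x) = s.
    # Matheron's rule: x = z + Sigma a (a' Sigma a)^{-1} (s - a'z) with a = 1,
    # so for a diagonal Sigma the adjustment weights are sigma2 / sigma2.sum().
    adj = (s - z.sum(axis=1))[:, None] * (sigma2 / sigma2.sum())
    return z + adj

x = sample(100_000)
print(x.sum(axis=1).mean(), x.sum(axis=1).var())
```

By construction each row sums to its drawn $s$ exactly, so the sum of the output has mean $m_s$ and variance $v_s$, and no MCMC tuning or mixing diagnostics are needed. Whether this answers the real question depends on whether the Gaussian/independence assumptions actually hold.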