We have a list of players $P_1,\ldots,P_n$ and a set $S$ of $n$-vectors, which always contains the $0$-vector (representing no deal). Each vector $(s_1,s_2,\ldots,s_n)$ in $S$ represents a deal that the players can jointly make, giving $P_1$ the score $s_1$, $P_2$ the score $s_2$, and so on. Each player publicly announces a probability distribution over the vectors in $S$, and every player then has the option of announcing a new distribution in response to any change. This continues until no player wishes to change (and announce) a new distribution. Once each player has made their final choice, each player selects a vector at random from their distribution: e.g. if $P_a$ chose $D_a$ with $D_a(w)=0.5$, then $P_a$ will choose $w$ with probability $50\%$. If all players make the same choice $v = (v_1,v_2,\ldots,v_n)$, each player is given a score in the manner defined above. If there is no consensus - i.e. not all players choose the same vector - each player gets the score $0$. If all players attempt to maximise their score, what is the (best) Nash equilibrium for choosing a deal?
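The final stage of the mechanism above can be sketched in a few lines of Python (the function name and the dictionary-based representation of distributions are my own illustration, not part of the problem statement):

```python
import random

def play_round(distributions, rng=random):
    """One payoff round: each player samples a deal-vector (a tuple) from
    their announced distribution (a dict mapping vectors to probabilities).
    Consensus pays out the agreed deal; otherwise everyone scores 0."""
    choices = []
    for dist in distributions:
        vectors, probs = zip(*dist.items())
        choices.append(rng.choices(vectors, weights=probs, k=1)[0])
    if all(c == choices[0] for c in choices):
        return list(choices[0])           # consensus: player i scores v_i
    return [0] * len(distributions)       # no consensus: all scores are 0

# Two players who both put all their mass on the deal (3, 2):
print(play_round([{(3, 2): 1.0}, {(3, 2): 1.0}]))  # [3, 2]
# Two players who announce conflicting point-mass distributions:
print(play_round([{(1, 1): 1.0}, {(2, 2): 1.0}]))  # [0, 0]
```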
EDIT: (Changed from secretly choosing a vector to publicly announcing a probability distribution over them, and added a few conditions; I hope these make my problem clearer/better-defined.)* Each player makes a random choice from a probability distribution over moves, attempting to maximise their average score. The $n$ distributions $D_1, \ldots, D_n$ selected will be the only ones such that, if a player were allowed to publicly announce and change theirs in response to any other player's change, the players would not settle on any other choice.
Example 1: If two players play rock-paper-scissors (where winning gives $1$ pt., drawing $0$, and losing $-1$ pts.) and they select their moves in the above manner, the equilibrium distributions for both players are $D(rock)=D(scissors)=D(paper)=1/3$, because for every other pair of distributions some player can always gain by choosing a new one.
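The equilibrium claim in Example 1 is easy to verify numerically: against a uniform opponent, every pure move of rock-paper-scissors has the same expected payoff ($0$), so no unilateral change of distribution helps. A small check (exact arithmetic via `fractions`, all names my own):

```python
from fractions import Fraction

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """Score for playing move a against move b: win 1, draw 0, lose -1."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

# Opponent plays the uniform distribution D(rock)=D(paper)=D(scissors)=1/3.
uniform = {m: Fraction(1, 3) for m in MOVES}

# Expected payoff of each pure reply against the uniform opponent:
expected = {a: sum(p * payoff(a, b) for b, p in uniform.items()) for a in MOVES}
print(expected)  # every pure move earns expected payoff 0
```

Since every pure reply earns the same expected payoff, every mixture earns $0$ as well, and neither player can gain by deviating.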
Also, the method $M$ that finds the distribution for each player must not depend on the order in which the players and their scores are listed: $M([P_1,P_2],\{(a,b),(x,y),(0,0)\})=(D_1,D_2) \implies M([P_2,P_1],\{(b,a),(y,x),(0,0)\})=(D_2,D_1)$
If two moves $A$ and $B$ are equivalent for a player with probability distribution $D$, then $D(A)$ must equal $D(B)$: the names of the moves, and the order in which they are listed, must make no difference.
Example 2: Alice, Bob and Charles play a game in which each player has 4 moves: $x,y,z$ & $w$. If Alice chooses $x$ or $y$ and Bob chooses the same, they each gain $1$ pt. and Charles gains $0$. If Alice chooses $z$ or $w$ and Charles chooses the same, they each gain $1$ pt. and Bob gains $0$. All four moves are equivalent for Alice (but not for Bob!), because she can swap the names of Bob and Charles and of $x,y,z,w$ in the game's description and still be playing the "same game". Therefore she must play each move with equal probability, and may not say to Bob: "You play $x$ and I will too." The same reasoning explains why the moves in Example $1$ all have the same equilibrium probability.
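Example 2 can also be checked numerically. Below is a sketch (the specific distributions for Bob and Charles follow the symmetry argument - Bob's own $x/y$ symmetry forces him to split evenly between $x$ and $y$, and likewise Charles between $z$ and $w$ - rather than a full equilibrium computation):

```python
from fractions import Fraction
from itertools import product

MOVES = ["x", "y", "z", "w"]

def scores(a, b, c):
    """Payoffs (Alice, Bob, Charles) for one joint move profile."""
    alice = bob = charles = 0
    if a in ("x", "y") and b == a:   # Alice and Bob match on x or y
        alice += 1; bob += 1
    if a in ("z", "w") and c == a:   # Alice and Charles match on z or w
        alice += 1; charles += 1
    return alice, bob, charles

alice_d   = {m: Fraction(1, 4) for m in MOVES}                 # forced by symmetry
bob_d     = {"x": Fraction(1, 2), "y": Fraction(1, 2), "z": 0, "w": 0}
charles_d = {"z": Fraction(1, 2), "w": Fraction(1, 2), "x": 0, "y": 0}

exp = [0, 0, 0]
for a, b, c in product(MOVES, repeat=3):
    p = alice_d[a] * bob_d[b] * charles_d[c]
    exp = [e + p * s for e, s in zip(exp, scores(a, b, c))]
print(exp)  # Alice expects 1/2, Bob and Charles 1/4 each

# Given uniform Alice, no pure move earns Bob more than 1/4:
bob_pure = {m: sum(alice_d[a] for a in MOVES if a in ("x", "y") and a == m)
            for m in MOVES}
print(bob_pure)  # x and y each give 1/4; z and w give 0
```

So Bob cannot improve on his symmetric $x/y$ split, which is consistent with the claim that Alice's forced uniform play prevents her from coordinating with either partner.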
*I believe that each player publicly announcing their distribution over moves until no player chooses to change, so long as "equivalent" moves for a player have the same probability, is equivalent to no information being exchanged, as long as every player plays optimally.
Sorry about the wall of text, that's as concise as I could get it!