Shuffle a poker deck between 4 players with the least required entropy


We are shuffling a standard poker deck of 52 cards between 4 players, each getting 13 cards. The order of cards for a particular player does not matter.

A naive algorithm is to first shuffle the whole deck, then split it into 4 parts. However, this requires $\log_2(52!) \approx 226$ bits of entropy.

Combinatorics says that such a shuffle should only require $\log_2\left(\frac{52!}{13!\,13!\,13!\,13!}\right) \approx 96$ bits of entropy, less than half as much.
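For reference, both figures can be checked numerically with a quick sketch:

```python
from math import factorial, log2

full = log2(factorial(52))                        # full shuffle: about 225.6 bits
deal = log2(factorial(52) // factorial(13) ** 4)  # distinct deals: about 95.4 bits
print(full, deal)
```

So 96 bits suffice to index every distinct deal.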

What is the algorithm that would produce such an efficient shuffle, using a random generator that produces integers in a range?

I was thinking of this algorithm:

  • Start with a sorted deck; for each card:
    1. Generate a number between 1 and 4 to determine which player the card goes to.
    2. Once one of the players has received 13 cards, only generate numbers between 1 and 3 to choose among the remaining players (and so on).
    3. Stop when only one player remains; give them all undealt cards.

Would such an algorithm produce an unbiased deal, and if not, what is the correct one?
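For concreteness, the proposal above can be sketched as follows (`proposed_deal` is an illustrative name, assuming a `randrange`-style generator):

```python
import random

def proposed_deal():
    """Deal 52 cards by picking, for each card, a uniformly random
    player among those who still need cards."""
    need = [13, 13, 13, 13]
    hands = [[] for _ in range(4)]
    for card in range(52):
        active = [p for p in range(4) if need[p] > 0]
        if len(active) == 1:              # last player gets all undealt cards
            hands[active[0]].append(card)
            continue
        p = active[random.randrange(len(active))]
        hands[p].append(card)
        need[p] -= 1
    return hands
```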

4 Answers

BEST ANSWER

Your approach won't work. Suppose the decks hold $13-a$, $13-b$, $13-c$, $13-d$ cards respectively, i.e. they still need $a$, $b$, $c$, $d$ more. You can assign the remaining cards in $\frac{(a+b+c+d)!}{a!\,b!\,c!\,d!}$ ways. If you assign the next card to the first deck, you can assign the rest in $\frac{(a+b+c+d-1)!}{(a-1)!\,b!\,c!\,d!}$ ways; for the second deck it is $\frac{(a+b+c+d-1)!}{a!\,(b-1)!\,c!\,d!}$, for the third $\frac{(a+b+c+d-1)!}{a!\,b!\,(c-1)!\,d!}$, and for the fourth $\frac{(a+b+c+d-1)!}{a!\,b!\,c!\,(d-1)!}$. Therefore you can't assign the next card with equal probabilities: the ratio of the probabilities has to be $a:b:c:d$.

The easiest way to choose a deck with these probabilities is to pick a random integer from $1$ to $a+b+c+d$ and look in which deck's range it falls. The upper bound for your random generator then starts at $52$ and drops by $1$ each time you assign a card. The entropy used is $\sum_{i=1}^{52}\log_2(i) = \log_2(52!)$, so if you fix your algorithm this way, you are back at the naive shuffle.
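The bias can also be verified exactly on a toy case (two players, two cards each) by enumerating the proposed procedure with exact fractions — a sketch, with `deal_distribution` as an illustrative helper name:

```python
from fractions import Fraction

def deal_distribution(need, next_card=0, hands=None, p=Fraction(1), dist=None):
    """Exact deal probabilities under the rule 'each card goes to a
    uniformly random player among those still needing cards'."""
    if dist is None:
        dist = {}
    if hands is None:
        hands = tuple(() for _ in need)
    active = [i for i, n in enumerate(need) if n > 0]
    if len(active) <= 1:
        if active:                        # last player takes all remaining cards
            i = active[0]
            extra = tuple(range(next_card, next_card + need[i]))
            hands = hands[:i] + (hands[i] + extra,) + hands[i + 1:]
        dist[hands] = dist.get(hands, Fraction(0)) + p
        return dist
    for i in active:
        new_need = list(need)
        new_need[i] -= 1
        new_hands = hands[:i] + (hands[i] + (next_card,),) + hands[i + 1:]
        deal_distribution(new_need, next_card + 1, new_hands, p / len(active), dist)
    return dist
```

With cards numbered from 0, `deal_distribution((2, 2))` gives the deal `((0, 1), (2, 3))` probability $1/4$ but `((0, 2), (1, 3))` only $1/8$; an unbiased deal would give $1/6$ to each.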

You can do it this way:

  1. generate a random number $k$ between $1$ and $\frac{52!}{13!13!13!13!}$.
  2. set $a,b,c,d:=13$, $u:=\frac{52!}{13!13!13!13!}$
  3. if all cards are assigned to a deck, end.
  4. set $g:=\frac{u}{a+b+c+d}$ ($g$ may be fractional, but each threshold $ga$, $g(a+b)$, $g(a+b+c)$ and each new $u$ below is always an integer)
  5. if $k \leq ga$, put the card in the first deck, set $u:=ga$, $a:=a-1$, return to step 3
  6. else if $k \leq g(a+b)$, put the card in the second deck, set $k:=k-ga$, $u:=gb$, $b:=b-1$, return to step 3
  7. else if $k \leq g(a+b+c)$, put the card in the third deck, set $k:=k-g(a+b)$, $u:=gc$, $c:=c-1$, return to step 3
  8. else put the card in the fourth deck, set $k:=k-g(a+b+c)$, $u:=gd$, $d:=d-1$, return to step 3

It is basically the same, except that you don't generate a new number at each step; instead you decide everything from a single number generated at the beginning. It is essentially an unrank function for permutations of a multiset.
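A sketch of this unranking in Python (`deal_from_rank` is an illustrative name; it uses a 0-based rank $k$ and computes the thresholds with exact integer arithmetic, since $g$ itself may be fractional):

```python
from math import factorial

def deal_from_rank(k, counts=(13, 13, 13, 13)):
    """Unrank k, with 0 <= k < 52!/(13!^4), into four hands of 13 cards."""
    counts = list(counts)
    n = sum(counts)
    u = factorial(n)
    for c in counts:
        u //= factorial(c)              # u = number of distinct deals
    hands = [[] for _ in counts]
    for card in range(n):
        remaining = n - card
        for p, c in enumerate(counts):
            t = u * c // remaining      # threshold g*c; always an exact integer
            if k < t:
                hands[p].append(card)
                counts[p] -= 1
                u = t
                break
            k -= t
    return hands
```

Drawing $k$ once, e.g. with `random.randrange(factorial(52) // factorial(13) ** 4)`, then consumes only about 96 bits for the whole deal.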

ANSWER

You can use a kind of binary-search process to decide which player receives each card:

  • The cards can be in any order, including totally sorted.

  • Keep a tally of how many cards each player can still receive. Initially, each player can receive 13 cards; these numbers will change as you deal out cards, and won't always stay equal.

  • You can write out those tallies in a diagram like this:

    □□□□ | □□□ | □□□□□ | □□

    This is a diagram of a situation where Player 1 needs four more cards, Player 2 needs three, Player 3 needs five, and Player 4 needs two.

  • To assign the next card to a player, you could pick a random position in this diagram list; each position is owned by one of the players. To pick a position in this list, you could use binary search: Flip a coin to decide whether the position is in the first or second half of the list. Flip another coin to pick whether the position is in the first or second half of that sublist. And so on, until the range narrows to a single position. Then, assign the card to whichever player owns that position in the list.

    However, because order within each partition doesn't matter, you can terminate the binary search early: as soon as the left and right endpoints of the current sublist both lie within a single player's partition, you know which player the search will land on, and you can assign the card immediately.

  • After assigning the card, update the diagram list and repeat with the next card. Repeat until all cards are assigned.

  • By giving each player a number of positions equal to the number of cards they still need, you ensure that the resulting deal is uniformly random. By using binary search and terminating as early as possible, you keep the expected number of random bits close to the theoretical minimum.
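The procedure above can be sketched in Python. One detail the answer leaves open is what to do when the number of open positions is not a power of two; the sketch below handles that by rejection sampling (an added assumption), while keeping the early termination:

```python
import random

def deal_binary_search(counts=(13, 13, 13, 13), flip=None):
    """Deal cards by binary-searching a random open position with coin
    flips, stopping each search as soon as the remaining range of
    positions lies entirely inside one player's partition."""
    if flip is None:
        flip = lambda: random.getrandbits(1)   # one fair coin flip
    need = list(counts)                        # cards each player still needs
    hands = [[] for _ in counts]

    def owner(pos):
        # which player's partition position `pos` falls in
        total = 0
        for p, c in enumerate(need):
            total += c
            if pos < total:
                return p

    for card in range(sum(counts)):
        n = sum(need)                          # open positions: 0 .. n-1
        bits = (n - 1).bit_length()
        while True:                            # rejection loop for n not a power of 2
            lo, span = 0, 1 << bits
            while span > 1:
                if lo >= n:
                    break                      # whole range out of bounds: reject
                if lo + span - 1 < n and owner(lo) == owner(lo + span - 1):
                    break                      # early termination: owner decided
                span >>= 1
                if flip():                     # first or second half of the range
                    lo += span
            if lo < n:
                p = owner(lo)
                hands[p].append(card)
                need[p] -= 1
                break
    return hands
```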

ANSWER

Let's approach this problem backward.

So shuffle the deck, deal the cards, have each player sort their hand. From this state, how much information can we gather?

As you already pointed out, we cannot retrieve the order of the deck after shuffling, before dealing. A shuffled deck contains more information than four hands of cards.

So how much information is there? For simplicity, we call the cards just card $1$, card $2$, etc. Imagine each player sorted their hand and put the cards in front of them, face down, with the lowest card on top. Then we know that one of the top cards must be card $1$, and the probability is $1/4$ for each. So the question "Which hand contains card $1$?" is like rolling a fair four-sided die.

Let us say Player 1 has card 1. We remove that card from their stack and ask "Which hand contains card 2?". Again, it must be one of the top cards, but the probabilities are no longer equal, since Player 1 now has just 12 cards while the others have 13. So this question is equivalent to rolling a four-sided die where one side has a slightly lower probability.

We can continue asking for each card in turn to retrieve as much information as we can.

Reversing this process gives us a means of distributing the cards: take a fair four-sided die and label the sides Player 1 to Player 4. Roll the die and give card 1 to the respective player. Then adjust the probabilities of the die to $12/51$ for the player who received card 1 and $13/51$ for each of the others. Continue rolling and adjusting until all cards have been distributed.

To sum up, we have a means of distributing the cards and a way of looking at a dealt deck containing the same information, so the distribution method should be optimal in this sense.
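The roll-and-adjust procedure can be sketched directly (assuming a `randrange`-style generator; note that, as written, it draws a fresh random number per card rather than combining the draws):

```python
import random

def deal_roll_and_adjust(counts=(13, 13, 13, 13)):
    """Deal card 0, 1, ... by rolling a biased four-sided die whose
    side probabilities are proportional to the cards each player
    still needs (e.g. 12/51 vs 13/51 after the first card)."""
    need = list(counts)
    hands = [[] for _ in counts]
    for card in range(sum(counts)):
        r = random.randrange(sum(need))   # uniform in 0 .. remaining-1
        for p, c in enumerate(need):
            if r < c:                     # side p covers c of the outcomes
                hands[p].append(card)
                need[p] -= 1
                break
            r -= c
    return hands
```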

ANSWER

Cards are numbered 0..51. Full shuffle:

// int random(int min, int max) returns a uniform integer in min..max

int cards[52]; // fill cards[] with 0..51 in any order
for (i = 0; i < 51; i++) {
  int r = random(i, 51); // r is uniform in i..51
  swap(cards[i], cards[r]);
}

Give 13 cards to each player in any sequence. BUT, the question asked for efficiency. Change ‘i<51’ to ‘i<39’ and the first 39 positions hold a uniformly random selection from the entire deck; shuffling the remaining 13 cards adds nothing, since they all go to the fourth player anyway.

Leave cards[] alone and shuffle again for subsequent games.

But now all you need is minimum entropy to perform the needed 39 random(min, max) operations. Off the top of my head…

E = ceil(log2(52·51·…·14)) // product of the 39 range sizes, i.e. 52!/13!, about 193 bits; add a few guard bits to keep rounding bias negligible
Make F, an E-bit random binary fraction with the binary point on the left.
For each random(min, max):
  F *= (max - min + 1)  // i.e. F *= range
  r = int(F)            // the integer part selects an offset into the range
  F -= r                // keep the fractional part for later draws
  return min + r        // r is in 0..range-1, so shift by min
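The same idea can be made exact with arbitrary-precision integers instead of a fixed-precision fraction: draw one integer $K$ uniformly below the product of all 39 range sizes, then peel each draw off with divmod. A Python sketch (`partial_shuffle_exact` is an illustrative name):

```python
import random
from math import factorial

def partial_shuffle_exact(K=None):
    """Partial Fisher-Yates shuffle driven by one big random integer:
    K encodes all 39 draws exactly, consuming log2(52!/13!) ~ 193 bits
    with no rounding bias."""
    P = factorial(52) // factorial(13)  # product of the 39 range sizes
    if K is None:
        K = random.randrange(P)
    cards = list(range(52))
    for i in range(39):                 # only the first 39 positions matter
        span = 52 - i                   # size of the range i..51
        K, r = divmod(K, span)          # r is uniform in 0..span-1
        cards[i], cards[i + r] = cards[i + r], cards[i]
    return [cards[0:13], cards[13:26], cards[26:39], cards[39:]]
```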