Swapping the $i$th largest card between $2$ hands of cards

Introduction to "game"

There are $2$ players and $2n$ cards, labelled $1, 1, 2, 2, 3, 3, 4, 4, \dots, n, n$. ($2$ of each card from $1$ to $n$)

Firstly, the $2$ players are each given $n$ cards randomly. More specifically, the $2n$ cards are placed in a uniformly random order; the first player gets the first $n$ cards and the second player gets the remaining $n$ cards. We assume each player can see all the cards.

On every turn, each player sorts his hand of cards. Label the sorted hands $a_1\le a_2\le\dots\le a_n$ and $b_1\le b_2\le\dots\le b_n$.

The players then pick an index $i$ with $a_i\neq b_i$ and exchange the cards $a_i$ and $b_i$. After that, the turn ends.

Clearly if $a_i=b_i$ for all $i$, then $a_i=b_i=i$, and the "game" terminates.

Example

Let $n=4$. We call the $2$ players $A, B$. At the start of the game, they receive:

$A: 1, 1, 2, 2$

$B: 3, 3, 4, 4$

Turn 1: They decide to swap the first card.

$A: \color{red}{3}, 1, 2, 2$

$B: \color{red}{1}, 3, 4, 4$

Before the second turn starts, they sort their cards.

$A: 1, 2, 2, 3$

$B: 1, 3, 4, 4$

Turn 2: This time, they decide to swap the third card. They cannot swap the first card as the numbers on the first card are the same.

$A: 1, 2, \color{red}{4}, 3$

$B: 1, 3, \color{red}{2}, 4$

They sort their cards.

$A: 1, 2, 3, 4$

$B: 1, 2, 3, 4$

The "game" ends as the players both have $1, 2, 3, 4$. This game ends in $2$ turns.

Questions about this "game"

Will it terminate? (The answer is yes, and an idea of a bound based on the monovariant will be added as a self-answer.)

At most how many moves can a $2n$-card game take to terminate? Which starting arrangements force it to take that long?

Assuming that the players pick the next card to exchange randomly (among all the possible moves, choose a valid move, each valid move with equal probability), what is the expected number of moves for the game to terminate?

Ideas about the question:

As suggested by hardmath:

If we constrain the players to always swap at the lowest index that works (or always the highest), the game ends in at most $n-1$ turns. The two cases are symmetric, so we treat the highest.

Base case: if $n=1$, both players hold the single card $1$, so no moves are needed.

For the inductive step with $n>1$, consider the two copies of the highest card $n$. If each player holds one copy, then $a_n=b_n=n$; we may delete both copies, reducing to the $(n-1)$-card case, which by induction takes at most $n-2$ moves. Otherwise, one player holds both copies. Swapping at the highest differing index hands a copy of $n$ to the other player; now each player holds one copy, and we delete them as before. In total, this takes at most $1+(n-2)=n-1$ moves.

From this we might guess that, on average, the game ends in $O(n^2)$ turns, since it takes $O(n)$ turns on average before the highest card happens to be swapped.

Simulation:

I ran a Monte Carlo simulation of random games. A simulation of $1000$ games of $1000$ cards gave an average number of swaps of $1310.703$.

C++ code for reference:

#include <cstdio>
#include <cstdlib>    // rand, srand
#include <algorithm>  // sort, random_shuffle, max, swap
using namespace std;
int N;                     // number of distinct labels (n)
int K;                     // number of games to simulate
int maxCounter = 0;        // most swaps seen in any single game
long long sumCounter = 0;  // total swaps across all games
int counter;
int arr[999999];           // arr[0..N-1] = hand A, arr[N..2N-1] = hand B
/** Results:
(X cards, 10^6 rounds)
1 -> 0 0
2 -> 1 0.334546 (guess: 1/3)
3 -> 2 0.733746
4 -> 3 1.180559
5 -> 5 1.667006 (guess: 5/3)
6 -> 6 2.179321
7 -> 8 2.721661
8 -> 9 3.283566
9 -> 10 3.867299
10 -> 12 4.471352
Other things I tried: (cards, rounds)
100 1000 -> 143 84.411
200 1000 -> 307 197.332
300 1000 -> 503 319.691
500 1000 -> 960 580.373
1000 1000 -> 1910 1310.703
**/
int main(void) {
    srand(23);
    printf("Number of cards: ");
    scanf("%d", &N);
    for (int i = 0; i < N; i++) {
        arr[i] = arr[i+N] = i;
    }
    printf("Number of rounds: ");
    scanf("%d", &K);
    for (int j = 0; j < K; j++) {
        // Deal: shuffle all 2N cards. (random_shuffle is deprecated in
        // C++14 and removed in C++17; shuffle from <random> replaces it.)
        random_shuffle(arr, arr + 2*N);
        counter = 0;
        while (true) {
            sort(arr, arr + N);
            sort(arr + N, arr + 2*N);
            // The game ends when the sorted hands are identical.
            bool done = true;
            for (int i = 0; i < N; i++) {
                if (arr[i] != arr[i+N]) {
                    done = false;
                    break;
                }
            }
            if (done) break;
            counter++;
            // Pick a uniformly random index at which the hands differ.
            int theMove = rand() % N;
            while (arr[theMove] == arr[theMove+N]) theMove = rand() % N;
            swap(arr[theMove], arr[theMove+N]);
        }
        maxCounter = max(maxCounter, counter);
        sumCounter += counter;
        if (K >= 100 && j % (K/100) == 0) printf("Done: %d\n", j);
    }
    printf("Maximum: %d\nAverage: %lf\n", maxCounter, sumCounter/(double)K);
    return 0;
}

Results of the code:

The average number of moves appears to grow faster than linearly in $n$.

Answers

Answer 1
This is not a complete answer to the question yet.

The monovariant $\sum i(a_i+b_i)$ was suggested by Michael in the comments.

The swap itself leaves the monovariant unchanged, since the two cards trade places at the same index $i$.

Re-sorting a hand can only increase $\sum_i i\,a_i$ (by the rearrangement inequality, the ascending order maximizes it), and strictly increases it when the hand was unsorted. So it suffices to show that after every swap, at least one of the $2$ hands is unsorted.

We break into $2$ cases:

Case 1: $|a_i-b_i|>1$

Without loss of generality $a_i>b_i$; since $a_i-b_i>1$, some value $v$ satisfies $b_i<v<a_i$. Suppose for contradiction that both hands remain sorted after the swap. In $A$'s hand, which now holds $b_i$ at position $i$, every card up to position $i$ is at most $b_i<v$ and every card after position $i$ is at least $a_{i+1}\ge a_i>v$. Similarly in $B$'s hand, which now holds $a_i$ at position $i$, every card before position $i$ is at most $b_i<v$ and every card from position $i$ on is at least $a_i>v$. So neither hand holds a card labelled $v$, contradicting the fact that there are two such cards. Hence at least one hand becomes unsorted.

Case 2: $|a_i-b_i|=1$

Without loss of generality, let $a_i-b_i=1$. For convenience, write $b_i=x$ and $a_i=x+1$.

$\begin{matrix} A:&\dots&x+1&\dots\\ B:&\dots&x&\dots \end{matrix}$

Now we consider the positions of the other $2$ cards labelled $x$, $x+1$.

Consider all the cards labelled $1, 2, 3,\dots, x-1$: there are $2(x-1)$ of them, an even number, and each must appear before position $i$ in whichever hand it lies, since the hands are sorted and each such card is smaller than both $a_i$ and $b_i$.

The $2(i-1)$ positions before position $i$, taken across both hands, can hold only these cards and the other copies of $x$ and $x+1$. If exactly one of those two copies appeared before position $i$, an odd number of cards would have to fill an even number of remaining positions, which is impossible.

Hence either both of the other copies appear before position $i$, or both appear after it.

This gives 2 cases (blue: cards to swap, red: after the swap). If both copies come after position $i$, the other $x$ must sit at $b_{i+1}$, so $B$ becomes unsorted; if both come before, the other $x+1$ must sit at $a_{i-1}$, so $A$ becomes unsorted. Representative configurations:

$\begin{matrix} A:&\dots&\color{blue}{x+1}&x+1&\dots\\ B:&\dots&\color{blue}{x}&x&\dots \end{matrix}\rightarrow\begin{matrix} A:&\dots&\color{red}{x}&x+1&\dots\\ B:&\dots&\color{red}{x+1}&x&\dots \end{matrix}$

$\begin{matrix} A:&\dots&x+1&\color{blue}{x+1}&\dots\\ B:&\dots&x&\color{blue}{x}&\dots \end{matrix}\rightarrow\begin{matrix} A:&\dots&x+1&\color{red}{x}&\dots\\ B:&\dots&x&\color{red}{x+1}&\dots \end{matrix}$

In both cases one hand becomes unsorted after the swap, so the monovariant strictly increases every turn.

Since the monovariant is an integer bounded above, it follows that the game terminates.

The ending value of the monovariant is $\sum i(2i) = \frac{n(n+1)(2n+1)}{3}$.

The starting value of the monovariant is at least $\sum 2i(n+1-i)=\frac{n(n+1)(n+2)}{3}$; by the rearrangement inequality this is the minimum of $\sum i(a_i+b_i)$ over all arrangements (attained when both hands read $n, n-1, n-2, \dots, 3, 2, 1$). Since the monovariant is an integer that strictly increases every turn, the game terminates within $\frac{n(n+1)(2n+1)}{3}-\frac{n(n+1)(n+2)}{3}=\frac{n^3-n}{3}$ moves.

Answer 2

Here is a plan with $O(n\log n)$ moves.
Start with $n=2^{m}-1$. Let one hand be $1,1,2,2,\dots,2^{m-1}-1,2^{m-1}-1,2^{m-1}$, i.e. both copies of every label below $2^{m-1}$ plus one copy of $2^{m-1}$; the other hand holds the other copy of $2^{m-1}$ and both copies of every higher label.
In $(n-1)/2=2^{m-1}-1$ moves, swap the cards at the middle position every time.
You will end with both cards labelled $2^{m-1}$ at the middle position, the higher cards forming a similar structure above $2^{m-1}$ and the lower cards forming a similar structure below it; in particular, one hand holds both copies of the lowest labels and the other both copies of the highest. For example, with $m=3$ ($n=7$) you would end with $1,1,2,4,5,5,6$ in one hand and $2,3,3,4,6,7,7$ in the other.
Then repeat the process for the higher cards and for the lower cards. The total number of moves satisfies $M(m)=2M(m-1)+2^{m-1}-1$.
For $n=3$, $2$ moves are achievable, so taking $M(2)=2$ and solving the recurrence gives $M(m)=\left(m-\frac32\right)2^{m-1}+1$ moves for $n=2^m-1$, which is $O(n\log n)$.