Optimal coin position to win dice game (generalisation)


I can't stop thinking about this recent question:

You have 3 coins and a die. There is a line with positions from 1 to 1000. You need to place the coins at any of these positions on the line. Starting from position 0, you keep rolling the die and jump forward that many steps along the line. If you reach a position with a coin on it, you win. If you pass 1000 without landing on a position with a coin, you lose. What are the optimal positions to place the coins on the line to maximize your chance of winning the game?

The die is $6$-sided and fair. An obvious guess is that the coins should be placed at $4,5,6$, but there's no clear way to prove this by hand. @Joriki solved it with the observation that coins can't be placed more than $5$ apart, combined with a computer search, and indeed this is the best arrangement. This is good, but it doesn't satiate me: the problem seems very neat, so I want a generalisable solution. So I would like to know whether this arrangement is still optimal with $c$ coins and an $n$-sided die.

I.e. is the strategy of placing the coins at $n-c+1, \dots , n$ always optimal for any $c$ and $n$?
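For reference, the original $n=6$, $c=3$ instance can be checked by brute force, along the lines of the computer search mentioned above. The sketch below (function name `win_prob` is mine) computes the win probability for a given coin set with a small dynamic program in which coin positions are absorbing, then searches all triples; the search window $1..18$ is an assumed bound on where an optimal coin can sit, not something proved here.

```python
from fractions import Fraction
from itertools import combinations

def win_prob(n, coins):
    """Probability of landing on a coin before passing max(coins),
    rolling a fair n-sided die from position 0."""
    coins = set(coins)
    m = max(coins)
    # p[s] = probability of ever standing on s without having hit a coin earlier
    p = [Fraction(0)] * (m + 1)
    p[0] = Fraction(1)
    for s in range(1, m + 1):
        p[s] = sum(p[s - k] for k in range(1, min(n, s) + 1)
                   if (s - k) not in coins) / n
    return sum(p[c] for c in coins)

# search all triples within positions 1..18 (assumed bound on optimal coins)
best = max(combinations(range(1, 19), 3), key=lambda t: win_prob(6, t))
print(best, win_prob(6, best))  # (4, 5, 6) with probability 343/432
```

The winning probability of $\{4,5,6\}$ comes out to $343/432 \approx 0.794$.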


I haven't been able to make much progress, so this can be ignored, but here are my observations:

If we say $y_s$ is the probability of ever landing on position $s$, then $y$ satisfies the recurrence relation $$y_s = \frac{1}{n}y_{s-1} + \dots + \frac{1}{n}y_{s-n}$$ with initial condition $y_0 = 1$ (and $y_s = 0$ for $s < 0$). This means we can find a closed-form formula for $y_s$ which looks like $$y_s = \sum_{i=1}^{n} a_i \lambda_i^s$$ where the $a_i$ are constants computed from the initial conditions and the $\lambda_i$ are the roots of the auxiliary equation $$\lambda^n - \frac{1}{n}\lambda^{n-1} - \dots - \frac{1}{n}\lambda - \frac{1}{n} = 0$$
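Rather than solving for the roots, the recurrence can also be iterated directly; a small sketch with exact rationals (the function name `landing_probs` is mine):

```python
from fractions import Fraction

def landing_probs(n, max_s):
    """y[s] = probability of ever landing on position s when repeatedly
    rolling a fair n-sided die from position 0 (so y[0] = 1)."""
    y = [Fraction(0)] * (max_s + 1)
    y[0] = Fraction(1)
    for s in range(1, max_s + 1):
        # y_s = (y_{s-1} + ... + y_{s-n}) / n, with y_t = 0 for t < 0
        y[s] = sum(y[s - k] for k in range(1, min(n, s) + 1)) / n
    return y

y = landing_probs(6, 30)
print(y[6])          # 16807/46656, the single most likely square
print(float(y[30]))  # approaches the renewal limit 2/(n+1) = 2/7
```

For $s \le n$ this gives $y_s = \frac{1}{n}\left(\frac{n+1}{n}\right)^{s-1}$, which is increasing, so $y_n$ is the largest single-square landing probability.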

Let $x_1 < \dots < x_c$ be the positions of the coins, and let $X_i$ be the event of ever landing on $x_i$. Then the probability of hitting any of them is, by the inclusion-exclusion principle, $$P(X_1 \cup \dots \cup X_c) = \sum_{i=1}^c P(X_i) - \sum_{i<j} P(X_i \cap X_j) + \dots + (-1)^{c} \sum_{1 \le i_1 < \dots < i_{c-1} \le c} P(X_{i_1} \cap \dots \cap X_{i_{c-1}}) + (-1)^{c-1} P(X_1 \cap \dots \cap X_c)$$ The probability of any of the intersections can be computed as $$P(X_{i_1} \cap \dots \cap X_{i_k}) = P(X_{i_1})\cdot P(X_{i_2}\mid X_{i_1}) \cdot \dots \cdot P(X_{i_k}\mid X_{i_{k-1}})$$ for $i_1 < \dots < i_k$ (the walk only moves right, so the smaller position must be hit first), where, because the walk is memoryless, $P(X_j \mid X_i)$ is just the probability $y_s$ of landing on square $s = x_j - x_i$ in a walk restarted from $x_i$.

So we can in principle substitute all this in to get a massive unwieldy formula for the probability of winning with a given combination of coin positions. However, the formula doesn't seem to lend itself to discrete maximisation. When written in terms of $y_s$ it looks symmetric enough that there might be a nice way to simplify it, but I can't see one.
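As a sanity check, the inclusion-exclusion expression can at least be evaluated mechanically and compared against a direct dynamic program in which coins are absorbing; the two agree. This is just a sketch (both function names are mine):

```python
from fractions import Fraction
from itertools import combinations

def landing_probs(n, max_s):
    # y[s] = probability of ever landing on s with no coins on the board
    y = [Fraction(0)] * (max_s + 1)
    y[0] = Fraction(1)
    for s in range(1, max_s + 1):
        y[s] = sum(y[s - k] for k in range(1, min(n, s) + 1)) / n
    return y

def win_prob_ie(n, coins):
    # inclusion-exclusion over subsets; by the memoryless property,
    # P(hit every coin in a sorted subset) is a product of y-values of the gaps
    coins = sorted(coins)
    y = landing_probs(n, max(coins))
    total = Fraction(0)
    for k in range(1, len(coins) + 1):
        for sub in combinations(coins, k):
            p = y[sub[0]]
            for a, b in zip(sub, sub[1:]):
                p *= y[b - a]
            total += (-1) ** (k + 1) * p
    return total

def win_prob_direct(n, coins):
    # absorbing dynamic program: the game stops as soon as any coin is hit
    coins = set(coins)
    m = max(coins)
    p = [Fraction(0)] * (m + 1)
    p[0] = Fraction(1)
    for s in range(1, m + 1):
        p[s] = sum(p[s - k] for k in range(1, min(n, s) + 1)
                   if (s - k) not in coins) / n
    return sum(p[c] for c in coins)

print(win_prob_ie(6, [4, 5, 6]))  # 343/432
```

Note that the free (non-absorbing) walk lands on some coin if and only if the stopped game wins, which is why the inclusion-exclusion over free-walk events gives the win probability.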

1 Answer

Edit: As Zoe Allen pointed out below, this argument is wrong, as the greedy placement does not take into account the reduction in hitting probability caused by the coins already placed.


I am thinking along these lines. First, we prove that there is an optimal placement that contains $n$. Given this, we show that there is an optimal placement which contains $\{n,n-1\}$, and so on until we run out of coins.


First, let us prove that there is an optimal placement that contains $n$.

Let me write $H(x; \{y_1, \dots, y_k\})$ to denote the probability of hitting $x$ given that we have already put coins on $\{y_1,\dots,y_k\}$; let's define it to be $0$ if $x$ is in $\{y_1,\dots,y_k\}$. Consider an optimal placement $\{y^*_1, \dots, y^*_c\}$, and suppose an oracle tells you part of this optimal placement, omitting the last coin, i.e., $\{y^*_1, \dots, y^*_{c-1}\}$.

If the partial placement that the oracle told you already contains $n$, then we are done.

If not, then we notice that we must have $H(y^*_c; \{y^*_1, \dots, y^*_{c-1}\}) = \max_x H(x; \{y^*_1, \dots, y^*_{c-1}\})$, since otherwise moving the last coin to the maximizer would improve on the optimal placement.

We have that $H(x;\{y^*_1, \dots, y^*_{c-1}\})$ is an increasing function of $x$ for $x$ in $[1,\dots,n]$ except when $x$ is in $\{y^*_1, \dots, y^*_{c-1}\}$. So in this region, $n$ is the maximizer.

Next, we can show that $H(x; \{y^*_1, \dots, y^*_{c-1}\}) \leq H(n; \{y^*_1, \dots, y^*_{c-1}\})$ for $x > n$. To show this, we notice that in this region $H(x; \{y^*_1, \dots, y^*_{c-1}\})$ is essentially an average of the previous $n$ values, with positions holding a coin counting as $0$.

Hence, $H(n+1; \{y^*_1, \dots, y^*_{c-1}\}) \leq H(n; \{y^*_1, \dots, y^*_{c-1}\})$, as we previously asserted that this is an increasing function on $[1,\dots,n]$, so each of the averaged values is at most $H(n; \{y^*_1, \dots, y^*_{c-1}\})$. You then simply do induction to show it for all $x > n$.

So we have that $H(n; \{y^*_1, \dots, y^*_{c-1}\}) = \max_x H(x; \{y^*_1, \dots, y^*_{c-1}\})$. In other words, there is an optimal placement which contains $n$.


Now to show that there is an optimal placement that contains $\{n,n-1\}$.

We repeat the same step, but with the constraint that the oracle can only tell me a partial placement of the form $\{y^*_1, \dots, y^*_{c-1}\}$ with $y^*_1 = n$. A partial placement of this form must exist by our previous argument.

We see that $H(x;\{y^*_1, \dots, y^*_{c-1}\})$ is increasing on $[1,\dots,n-1]$ and that $H(x;\{y^*_1, \dots, y^*_{c-1}\}) \leq H(n-1;\{y^*_1, \dots, y^*_{c-1}\})$ for $x > n$. So now we know that there exists an optimal placement which contains $\{n,n-1\}$.


You can basically repeat the argument until you have an optimal placement of the form $\{n-c+1, \dots, n\}$. What we are doing here is essentially some form of greedy algorithm. I bet there is a cleaner proof using some matroid argument, but nothing comes to mind right now.
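For what it's worth, the greedy procedure described above can be sketched in code (names and the search horizon $c \cdot n$ are my own assumptions); per the edit at the top it is not a valid optimality argument in general, but on the original instance it does reproduce $\{4,5,6\}$:

```python
from fractions import Fraction

def hit_prob(x, placed, n):
    """H(x; placed): probability of landing on x before hitting any coin
    in `placed`, rolling a fair n-sided die from position 0."""
    if x in placed:
        return Fraction(0)
    p = [Fraction(0)] * (x + 1)
    p[0] = Fraction(1)
    for s in range(1, x + 1):
        p[s] = sum(p[s - k] for k in range(1, min(n, s) + 1)
                   if (s - k) not in placed) / n
    return p[x]

def greedy_placement(n, c, horizon=None):
    # repeatedly add the coin with the largest marginal hitting probability;
    # the horizon c*n is an assumed search bound, not part of the argument
    horizon = horizon or c * n
    placed = set()
    for _ in range(c):
        best = max((x for x in range(1, horizon + 1) if x not in placed),
                   key=lambda x: hit_prob(x, placed, n))
        placed.add(best)
    return sorted(placed)

print(greedy_placement(6, 3))  # [4, 5, 6]
```

Here $H(x; \text{placed})$ is exactly the marginal gain of adding a coin at $x$, since the game ends at the first coin hit.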