Optimal strategy in betting game


You and your friend are playing a game. You both start with a score of $0$. Also, you both start with $\$1$. At each step, you're allowed to bet a fraction of your $\$1$, and whoever bets more money wins that "round". However, whoever wins the round (bets more money) loses whatever they bet, and whoever loses that round (bets less money) keeps whatever they bet. If you win, your score increases by $1$; if you lose, your score decreases by $1$. The game terminates when a player gets a score of $-3$ (they lose) or $+3$ (they win). If you bet the same amount of money as your opponent, then your opponent wins.
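For concreteness, the round mechanics can be sketched in a few lines of Python (the function name and state layout here are my own, not part of the original problem):

```python
def play_round(bet_you, bet_opp, money_you, money_opp, score):
    """Resolve one round: the higher bet wins the round but is forfeited;
    the lower bet is kept. A tie counts as a win for the opponent."""
    if bet_you > bet_opp:
        return money_you - bet_you, money_opp, score + 1  # you win, score +1
    return money_you, money_opp - bet_opp, score - 1      # opponent wins, score -1

print(play_round(0.5, 0.25, 1.0, 1.0, 0))   # you outbid: -> (0.5, 1.0, 1)
print(play_round(0.25, 0.25, 1.0, 1.0, 0))  # tie goes to opponent: -> (1.0, 0.75, -1)
```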

What's the optimal amount of money you should bet in the first round?

I was asked this question for a quantitative research position, but I couldn't solve the problem. They seemed to suggest that the answer was irrational, but I still can't figure it out. Does anyone have any ideas?

An extreme strategy like betting everything in one round isn't feasible, since whoever wins a round forfeits their bet.


There are 2 best solutions below


I would argue that the optimal initial bet is $\frac{2 - \sqrt{2}}{2} \approx 0.29$.

To do so, we need a few arguments:

  1. Whatever bet you use to go from $0$ to $+1$, the second bet (from $+1$ to $+2$) must be essentially the same. Suppose not: let $x$ be the bet you make to go from $0$ to $+1$, and let $y = x + \epsilon$ with $\epsilon > 0$ be the bet you make to go from $+1$ to $+2$. Your opponent can then bet $x + \epsilon/4$ in the first round and $x + \epsilon/2$ in the second. Now, one of the following happens:

a) You let your opponent win the first round and then win the second, in which case the money remaining is $(1 - (x + \epsilon/2), 1 - (x + \epsilon /4))$, so your opponent has gained a net advantage. You are doomed to lose if you repeat this strategy.

b) Your opponent lets you win both rounds, so you are now at $+2$ with $(1 - (x + y), 1)$.

c) You let your opponent win both rounds, so your opponent is now at $+2$ with $(1, 1-(2x + 3 \epsilon / 4))$.

Since you want to make sure you are doing as well as your opponent, you need to pick $y$ arbitrarily close to $x$.

Once you reach $+2$ with $(1 - 2x, 1)$ (taking $y \to x$), your opponent must counter by outbidding your all-in bet of $1 - 2x$, so we are back at $+1$ with $(1 - 2x, 2x)$.

Note that the game is scale invariant, so only the ratio of the two bankrolls matters. If you ended up strictly better off (up to scaling), your opponent could have copied your strategy; if you ended up strictly worse, you would just have to repeat the same game and would eventually lose. So we require the proportions of money to be preserved, i.e. $(1 - x, 1) \propto (1 - 2x, 2x)$. Setting $1 - x = \frac{1 - 2x}{2x}$ gives the quadratic $2x^2 - 4x + 1 = 0$; its root less than $1$ is $x = \frac{2 - \sqrt{2}}{2}$, the answer mentioned before.
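As a sanity check, the proportionality condition can be verified numerically (a Python sketch; the variable names are mine):

```python
import math

# Condition (1 - x, 1) ∝ (1 - 2x, 2x), i.e. 1 - x = (1 - 2x)/(2x).
# Clearing the denominator: 2x(1 - x) = 1 - 2x, i.e. 2x^2 - 4x + 1 = 0.
a, b, c = 2.0, -4.0, 1.0
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]
x = min(roots)  # the root below 1

print(x)                       # ~0.292893...
print((2 - math.sqrt(2)) / 2)  # the closed form, same value
assert abs(x - (2 - math.sqrt(2)) / 2) < 1e-12
assert abs((1 - x) - (1 - 2 * x) / (2 * x)) < 1e-12  # the condition holds
```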


Denote by $(a, b, \sigma)$ the game state where you have $\$a$, your opponent has $\$b$, and your score is $\sigma$. Clearly the game is scale-independent, so the ratio of your money to your opponent's is all that matters to determine the winner. Also, clearly it can only help you to have more money at any point; so there must be exactly one transition (as the ratio is increased) from losing to winning for each score. Let $R_\sigma$ be the critical ratio when your score is $\sigma$; that is, if the ratio is larger, you win, and if the ratio is smaller, you lose. By symmetry, $R_0=1$. (The winner at the critical ratio for each score could still go either way; certainly it depends on the tie-breaker rules, while the rest of this analysis does not.)

First consider $\sigma=+2$. You win games $(r,1,+2)$ with $r > 1$; for $r \le 1$, your opponent can (must) match your bet, which should therefore be as large as possible, leading to the game $(r,1-r,+1)$, which goes from losing to winning at $r/(1-r)=R_1$. So $R_2/(1-R_2)=R_1$, or $R_2=R_1/(1+R_1)$.

Now consider $\sigma=+1$; you again clearly win for $r > 1$, but for $r \le 1$, your opponent can choose whether to match your bet or not. Let $s(r) \le r$ be your bet. Then your opponent can choose whether the next state is $(r-s(r), 1, +2)$ or $(r, 1-s(r), 0)$. The latter is a loser for you if $r < 1-s(r)$, so you must have $s(r) \ge 1-r$. The former is a loser for you if $r-s(r)< R_2$, so you must also have $s(r) \le r-R_2$. These conditions become incompatible (that is, your opponent can always win) if $1-r>r-R_2$, or $r<(1+R_2)/2$. We conclude that $R_1=(1+R_2)/2$. Combining this with the previous relation between $R_1$ and $R_2$, we find that $R_2=-1+\sqrt{2}$, and that $R_1=\frac{1}{2}\sqrt{2}$.
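The two relations $R_2 = R_1/(1+R_1)$ and $R_1 = (1+R_2)/2$ combine, by substitution, to $2R_1^2 = 1$; a quick numeric confirmation (Python sketch):

```python
import math

# Substituting R2 = R1/(1 + R1) into R1 = (1 + R2)/2:
# 2*R1*(1 + R1) = (1 + R1) + R1  =>  2*R1^2 = 1  =>  R1 = 1/sqrt(2).
R1 = 1 / math.sqrt(2)  # = sqrt(2)/2 ~ 0.70711
R2 = R1 / (1 + R1)     # = sqrt(2) - 1 ~ 0.41421

assert abs(R2 - R1 / (1 + R1)) < 1e-12       # sigma = +2 relation holds
assert abs(R1 - (1 + R2) / 2) < 1e-12        # sigma = +1 relation holds
assert abs(R2 - (math.sqrt(2) - 1)) < 1e-12  # matches -1 + sqrt(2)
```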

Finally, let's determine your optimal bet from $(1,1,0)$. If you bet $s$, your opponent can choose whether the outcome is $(1-s,1,+1)$ or $(1,1-s,-1)$. One of these is a clear loser unless $1-s=R_1$, or $$s=1-R_1=1-\frac{1}{2}\sqrt{2}.$$ This is your optimal initial play.
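This agrees with the first answer's $\frac{2-\sqrt{2}}{2}$, since $1 - \frac{\sqrt{2}}{2} = \frac{2-\sqrt{2}}{2}$; checking numerically (Python sketch):

```python
import math

s = 1 - math.sqrt(2) / 2  # optimal first bet from this analysis
print(s)                  # ~0.292893...
# Same number as the first answer's (2 - sqrt(2))/2:
assert abs(s - (2 - math.sqrt(2)) / 2) < 1e-12
```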

Note that because of the unfair tiebreaker rules, your opponent will win this game... if you bet exactly the critical amount each round, then your opponent will exactly match your bets and win in three rounds. However, there is a silver lining: you can draw out your defeat, making your opponent spend an arbitrarily large number of rounds reaching her inevitable victory. This can be done by betting your opponent's wealth plus a sufficiently small $\varepsilon$ when the score reaches $-2$.