Better to bet $\$50$ once or $\$25$ twice?

Say there is a situation in which you are going to bet a total of $\$50$ and you want to make $\$100$. The probability of winning any bet is always a coin flip ($50\%$).

Is it better to bet ($1$) the $\$50$ all at once, which gives you a single $50\%$ chance of making the $\$100$, or is it better to bet ($2$) as follows:

  • You first bet $\$25$ (keeping the other $\$25$ in hand); if you win, you then bet the $\$50$ that you now have. This gives a $50\% \times 50\% = 25\%$ chance of winning the first round of bets, and if you win, you also get to keep the other $\$25$.
  • If you lose, you bet the remaining $\$25$ using the same strategy, hoping to win the first bet and then the second bet to win a total of $\$100$.
BEST ANSWER

Assuming from your statement that: (1) the goal is to get \$100 (or more), and (2) getting more than \$100 is no better than getting exactly \$100.

The wording "better" is rather imprecise, but I'll interpret it as "Which strategy has the higher success probability?"

Then for strategy 1, as you point out, there is a 50% success rate.

For strategy 2, I'm assuming it reads as follows: "bet \$25 if you have \$50 or less, bet \$50 if you have \$75".

Then, the problem is that this strategy may cause the game to go on indefinitely. But let's take a look:

When you have \$50, after 2 tosses there is: a 25% chance you'll lose, a 25% chance you'll win, a 25% chance the game will restart at \$50, and a 25% chance you'll have \$25.

When you have \$25, you have: a 50% chance of restarting at \$50 and a 50% chance of losing.
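(These observations can be sanity-checked with a quick Monte Carlo sketch in Python, under the reading of strategy 2 given above: bet \$25 while holding \$50 or less, bet \$50 at \$75, and stop at \$0 or at \$100 or more. The estimate should land near the exact value derived further down.)

```python
import random

def simulate_strategy_2(trials=200_000):
    """Monte Carlo estimate of strategy 2's success probability, assuming the
    reading above: bet $25 while holding $50 or less, bet $50 at $75,
    and stop on reaching $0 or at least $100 (the game ends with probability 1)."""
    wins = 0
    for _ in range(trials):
        cash = 50
        while 0 < cash < 100:
            stake = 50 if cash == 75 else 25
            cash += stake if random.random() < 0.5 else -stake
        wins += cash >= 100
    return wins / trials

print(simulate_strategy_2())  # tends to land near 0.40
```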

Say that your cash after the $n$-th toss is $c_n$; then, translating the previous observations:

$P(c_n=125|c_{n-2}=50) = 0.25$

$P(c_n=50|c_{n-2}=50) = 0.25$

$P(c_n=25|c_{n-2}=50) = 0.25$

$P(c_n=0|c_{n-2}=50) = 0.25$

$P(c_n=50|c_{n-1}=25) = 0.50$

$P(c_n=0|c_{n-1}=25) = 0.50$

Then, we can list all cases of conditional probabilities:

$P(c_n= 0|c_{n-1}= 0) = 1.0$

$P(c_n= 0|c_{n-1}= 25) = 0.5$

$P(c_n= 0|c_{n-1}= 50) = 0$

$P(c_n= 0|c_{n-1}= 75) = 0$

$P(c_n= 0|c_{n-1}=125) = 0$

$P(c_n=25|c_{n-1}= 0) = 0$

$P(c_n=25|c_{n-1}= 25) = 0$

$P(c_n=25|c_{n-1}= 50) = 0.5$

$P(c_n=25|c_{n-1}= 75) = 0.5$

$P(c_n=25|c_{n-1}=125) = 0$

$P(c_n=50|c_{n-1}= 0) = 0$

$P(c_n=50|c_{n-1}=25) = 0.5$

$P(c_n=50|c_{n-1}=50) = 0$

$P(c_n=50|c_{n-1}=75) = 0$

$P(c_n=50|c_{n-1}=125) = 0$

$P(c_n=75|c_{n-1}= 0) = 0$

$P(c_n=75|c_{n-1}=25) = 0$

$P(c_n=75|c_{n-1}=50) = 0.5$

$P(c_n=75|c_{n-1}=75) = 0$

$P(c_n=75|c_{n-1}=125) = 0$

$P(c_n=125|c_{n-1}= 0) = 0$

$P(c_n=125|c_{n-1}= 25) = 0$

$P(c_n=125|c_{n-1}= 50) = 0$

$P(c_n=125|c_{n-1}= 75) = 0.5$

$P(c_n =125|c_{n-1}=125) = 1$

And we have a Markov process, quite interesting. Writing $X_n$ for the probability distribution over the states $(0, 25, 50, 75, 125)$ after the $n$-th toss, we have:

$$X_n = M X_{n-1}$$

with the state transition matrix:

$$ M = \begin{bmatrix} 1 & 0.5 & 0 & 0 & 0\\ 0 & 0 & 0.5 & 0.5 & 0\\ 0 & 0.5 & 0 & 0 & 0\\ 0 & 0 & 0.5 & 0 & 0\\ 0 & 0 & 0 & 0.5 & 1 \end{bmatrix} $$
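As a quick numerical check (a sketch, not part of the derivation itself), one can build $M$ with NumPy, verify that every column is a probability distribution, and raise $M$ to a large power to read off the absorption probabilities directly:

```python
import numpy as np

# States ordered (0, 25, 50, 75, 125); column j is the distribution of the
# next state given current state j, so that X_n = M X_{n-1}.
M = np.array([[1.0, 0.5, 0.0, 0.0, 0.0],   # to   0
              [0.0, 0.0, 0.5, 0.5, 0.0],   # to  25
              [0.0, 0.5, 0.0, 0.0, 0.0],   # to  50
              [0.0, 0.0, 0.5, 0.0, 0.0],   # to  75
              [0.0, 0.0, 0.0, 0.5, 1.0]])  # to 125

assert np.allclose(M.sum(axis=0), 1.0)       # every column sums to 1

X0 = np.array([0, 0, 1, 0, 0], dtype=float)  # start at $50 with certainty
Xn = np.linalg.matrix_power(M, 200) @ X0     # distribution after many tosses
print(Xn.round(4))                           # -> [0.6  0.   0.   0.   0.4]
```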

We need to find the limit of the sequence $M^n$ as $n$ goes to infinity, and recall that $X_0=[0,0,1,0,0]^T$ (i.e. $c_0=50$).

Diagonalizing $M$ we have: $$ M=V D V^{-1} $$ Therefore: $$ M^n=V D^n V^{-1} $$

Since $D$ has the eigenvalue $1$ twice and all other eigenvalues have modulus less than $1$, define $W=\lim_{n \to \infty} D^n$. Then $W$ has two $1$'s on the diagonal and all other entries equal to zero.

The probabilities as the number of tosses goes to infinity are given by:

$$ X = V W V^{-1} \, [0,0,1,0,0]^T $$
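The same diagonalization can be reproduced numerically (again just an illustrative sketch using NumPy's `eig`; the tiny imaginary parts coming from the complex eigenvalue pair are discarded at the end):

```python
import numpy as np

# Same M and X0 as in the previous snippet (states 0, 25, 50, 75, 125).
M = np.array([[1.0, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.5, 1.0]])
X0 = np.array([0, 0, 1, 0, 0], dtype=float)

eigvals, V = np.linalg.eig(M)                 # M = V D V^{-1}
# W = lim D^n: keep the two eigenvalues equal to 1, zero out all |lambda| < 1.
W = np.diag(np.where(np.isclose(abs(eigvals), 1.0), 1.0, 0.0))

X = V @ W @ np.linalg.inv(V) @ X0
print(X.real.round(4))                        # -> [0.6  0.   0.   0.   0.4]
```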

And, surprisingly, the probability of losing is 60% against a probability of winning of 40%. Therefore, the first policy was better all along.

Note that if the question were "which policy has the higher expected return?", then $0.5 \times 100 = 50 = 0.4 \times 125$, so both policies have the same expected return.
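(As a one-line check of that arithmetic, using the probabilities derived above:)

```python
# Expected final cash under each policy (probabilities from the analysis above).
strategy_1 = 0.5 * 100 + 0.5 * 0   # bet $50 once: end with $100 or $0
strategy_2 = 0.4 * 125 + 0.6 * 0   # bet in $25 steps: end with $125 or $0
print(strategy_1, strategy_2)      # 50.0 50.0 -- identical expected return
```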