Consecutive fails with changing probabilities


In an online game you can bet your money on the success of an event. You start with a probability p = 5% of success, and every time the event fails you gain 1 percentage point of success for the next attempt. For instance, after one fail you have a probability of success of 6%, after two fails a probability of 7%, and so on. If you succeed, the probability is reset to 5% and the game effectively restarts. After 45 fails you no longer gain the extra 1%, i.e., the probability is capped at 50%. You can bet one of two fixed amounts, 1 or 10. If you win, you receive 10 times the amount you bet; otherwise you lose the amount you bet.

From my understanding of the game, the goal is to raise the chance with 1-coin bets (coin/dollar/euro, whatever you prefer) and then switch to 10-coin bets in order to get the reward of 100 coins.

What I am really interested in is how to model the probability of n consecutive fails, the expected cost of reaching such an n with 1-coin bets, and the expected cost of reaching a success with 10-coin bets after having built up a given probability with 1-coin bets.

I don't think it's an easy question. The only part I can answer is the first: the probability of having n consecutive fails should be (1 - p) x (1 - p - 0.01) x (1 - p - 0.02) x ... x (1 - p - (n-1)/100), since the failure probability shrinks by 0.01 with each attempt, but I'm not totally sure because the probability changes over the attempts.
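To make this concrete, here is a quick Python sketch of that product (the function name is mine; it ignores the 50% cap, so it is only meant for n ≤ 45):

```python
def p_consecutive_fails(n, p=0.05):
    """Probability that the first n attempts all fail (no cap, so n <= 45).

    Attempt k (0-based) succeeds with probability p + 0.01*k,
    hence fails with probability 1 - p - 0.01*k.
    """
    prob = 1.0
    for k in range(n):
        prob *= 1.0 - p - 0.01 * k
    return prob
```

For example, `p_consecutive_fails(2)` is 0.95 × 0.94 = 0.893.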

Any suggestion on what to study or read to answer these questions is appreciated as a solution.

BEST ANSWER

Let $P_1(n)$ denote the probability of $n$ consecutive fails. Then \begin{equation} P_1(n)=0.5^{\max \lbrace n,45\rbrace -45} \cdot\prod_{k=1}^{\min\lbrace n,45\rbrace } \bigl(1 - 0.05-0.01\cdot (k-1)\bigr), \end{equation} since attempt $k$ succeeds with probability $0.05+0.01(k-1)$, capped at $0.5$ from attempt $46$ onward. But independently of what the probability of $n$ consecutive fails is, it is easy to see (unless I'm making some mistake) what the optimal strategy to play this game is: bet as little as you can in each round as long as the expected value of your bet is not positive, and start betting as much as you're allowed as soon as your expected value is positive. One can rapidly show that this strategy gives you the maximum possible expected value. (Well, either this one, or the variant that starts betting as much as you can as soon as your expected value becomes non-negative; I haven't checked.)
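For what it's worth, a direct Python transcription of $P_1$ (the function name is mine), under the indexing assumption that attempt $k$ (1-based) succeeds with probability $0.05+0.01(k-1)$, capped at $0.5$:

```python
def p1(n):
    """Probability of n consecutive fails.

    Attempt k (1-based) succeeds with probability 0.05 + 0.01*(k - 1),
    capped at 0.5 from attempt 46 onward.
    """
    prob = 1.0
    for k in range(1, min(n, 45) + 1):
        prob *= 1.0 - 0.05 - 0.01 * (k - 1)
    return prob * 0.5 ** max(n - 45, 0)
```

Beyond 45 fails each extra fail simply multiplies by $0.5$, which is what the `0.5 ** max(n - 45, 0)` factor does.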

Calculating the expected value of this strategy is done as you would with any other strategy. Counting rounds from $0$, the probability of winning at the $n$-th bet (and thus losing all previous bets) is given by $P_2(n)=(0.05+0.01n)\cdot P_1(n)$ for $0\leq n\leq 45$ and $P_2(n)=0.5\cdot P_1(n)$ for $n>45$. With these values determined we need only calculate how much we win/lose in each situation. If we win in the zeroth round, we'll have made a profit of $9$ (we receive $10$ but paid $1$), in the first round a profit of $8$, ... in the fifth a profit of $4$, in the sixth (where we start betting 10 units each round because our probability of winning this round is $0.11 > 10\%$, so we expect to profit from our bet) a profit of $90-6=84$, in the seventh a profit of $90-16=74$, ... you get the gist.

Our expected value will be \begin{equation} E=\sum_{n=0}^5 (9-n)\cdot P_2(n) + \sum_{n=6}^\infty \bigl(84-10\cdot(n-6)\bigr)\cdot P_2(n). \end{equation} Truly a pain to calculate by hand. This is not my area at all; I know very little of either probability theory or game theory, so there probably is a more elegant way to arrive at a solution than this.
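Numerically, though, the sum is easy to truncate and evaluate. A small self-contained Python sketch (names are mine), assuming the net-payout convention above (a winning 1-coin bet returns 10, so profit 9; a winning 10-coin bet returns 100, so profit 90, minus everything lost in earlier rounds):

```python
def p1(n):
    """Probability of n consecutive fails (50% cap after 45 fails)."""
    prob = 1.0
    for k in range(1, min(n, 45) + 1):
        prob *= 1.0 - 0.05 - 0.01 * (k - 1)
    return prob * 0.5 ** max(n - 45, 0)

def p2(n):
    """Probability that the first win happens in round n (0-based)."""
    return min(0.05 + 0.01 * n, 0.5) * p1(n)

def expected_value(n_max=2000):
    """Expected profit: bet 1 in rounds 0-5, bet 10 from round 6 on.

    Winning in round n <= 5 nets 9 - n (profit 9 minus n coins lost).
    Winning in round n >= 6 nets 90 - 6 - 10*(n - 6).
    The tail beyond n_max is negligible (P1 at least halves per round).
    """
    total = 0.0
    for n in range(n_max):
        profit = (9 - n) if n <= 5 else 84 - 10 * (n - 6)
        total += profit * p2(n)
    return total
```

As a sanity check, the $P_2(n)$ sum to $1$ (the game ends with a win with probability one, since the failure probability never exceeds $0.95$), and the resulting expected value comes out positive.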