Short statement of the problem: Two players play roulette at a casino. Both start with the same initial amount. Each player repeatedly places his favorite bet, and stops playing as soon as he has made a positive overall gain. The players may vary their stakes as they wish during the game, as long as they never bet more than they have (and we assume they choose their stakes optimally). The second player's bets are more likely to be successful than the first's, but at the same time they entail higher losses when they fail (and therefore lead more quickly to ruin). All in all, which player has the better probability of winning in the end?
Judging from a few numerical values, I conjecture that it is the second player (note that this is obvious in the "base case" where each player plays only once). Does anybody know how to prove this, or have a counterexample?
The (messy) details: At every spin of a French roulette wheel, the gain $G^i$ of a single player can be modeled by the following distribution:
$$ P\Bigg(G^i=\bigg(\frac{36}{i}-1\bigg)w\Bigg)=\frac{i}{37}, \ P(G^i=-w)=\frac{37-i}{37} $$
where $w$ is the wager and $i\in\lbrace 1,2,3,6,12,18\rbrace$ (thus $i=12$ corresponds to a "line" bet, $i=1$ to a "straight up" bet, etc.). Each player enters the casino with an initial amount of $M$ units of currency, always uses his favorite $i$, and plays at most $n$ spins (fewer if he loses all his money earlier), choosing his wagers according to some strategy $s$. Denote by $\pi(i,M,s)$ the probability that this strategy puts the player in a situation of positive gain at least once during the game (in other words, the probability that $\sum_{j=1}^k G^i_j>0$ for some $k\leq n$, where $G^i_j$ is the algebraic gain at the $j$-th spin). Since there are only finitely many choices for $s$, there is a maximal value $p(i,M,n)=\max_{s}\pi(i,M,s)$ attained by one (or perhaps several) optimal strategies.
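As a quick sanity check on this model, a short Python sketch (exact rational arithmetic; the helper names are mine) verifies the well-known fact that the expected gain is $-w/37$ for every admissible $i$, so the bets differ in variance, not in house edge:

```python
from fractions import Fraction

def gain_distribution(i, w=1):
    """Distribution of the gain G^i for a wager w on a bet covering i numbers."""
    win = Fraction(36, i) * w - w          # payout (36/i - 1) * w on a win
    return {win: Fraction(i, 37), -w: Fraction(37 - i, 37)}

def expected_gain(i, w=1):
    return sum(g * p for g, p in gain_distribution(i, w).items())

# The house edge is the same for every i: E[G^i] = -w/37.
for i in (1, 2, 3, 6, 12, 18):
    assert expected_gain(i) == Fraction(-1, 37)
```

This is why the question has to be about the probability of ever being ahead: expectation alone cannot distinguish the bets.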
My conjecture is then that $p$ is increasing in the first variable, i.e. that $p(i,M,n) < p(j,M,n)$ whenever $i < j$ and $n\geq 1$.
The $p(i,M,n)$ themselves are computable for any given $(i,M,n)$ (the whole thing reduces to solving an $(M+1)\times(M+1)$ linear system coming from the underlying Markov chain), but it seems that there is no simple closed form.
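For small parameters these values can be computed by straightforward backward induction on (current fortune, spins remaining). A sketch in Python, under the simplifying assumption that wagers are restricted to whole units of currency (a discretization of the problem; all names are mine):

```python
from fractions import Fraction
from functools import lru_cache

def p(i, M, n):
    """Maximal probability of ever reaching a positive cumulative gain,
    starting with M units and betting whole-unit wagers on a chance
    covering i numbers, over at most n spins (stop at the first gain)."""
    p_win, p_lose = Fraction(i, 37), Fraction(37 - i, 37)
    payout = 36 // i - 1              # net gain per unit wagered, on a win

    @lru_cache(maxsize=None)
    def best(m, spins):
        # m = current fortune (<= M); any fortune above M ends the game.
        if m == 0 or spins == 0:
            return Fraction(0)
        value = Fraction(0)
        for w in range(1, m + 1):
            after_win = m + payout * w
            win_part = Fraction(1) if after_win > M else best(after_win, spins - 1)
            value = max(value, p_win * win_part + p_lose * best(m - w, spins - 1))
        return value

    return best(M, n)

print([float(p(i, 3, 2)) for i in (1, 2, 3, 6, 12, 18)])
```

A single spin ($n=1$) wins with probability exactly $i/37$ whatever the wager, recovering the obvious base case; for $M=3$, $n=2$ the printed values come out increasing in $i$, consistent with the conjecture.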
Completely ignoring the main part of the question, let me address the optimal strategy for the game you have described: the goal is to end up, at some point, with more than you started with, in a game where every bet has negative expected value.
The general principle is this: on each bet, stake the smallest amount such that winning that bet wins you the game.
Following this principle, there is some number $k$ of consecutive losses that will bankrupt you; it depends on the payout odds, your initial stake, and the sizes of your bets. If you win any bet within the first $k$ rounds, you win the game. If you do not, then you are bankrupt.
Curiously, if the game is not limited to real-world currency denominations, then by choosing your initial bet small enough you can make the probability of victory as large as you like! For example, if my stake is a dollar but my first bet (on an even-money chance, doubling after each loss) is $2^{-100}$ dollars, then $k=100$ and I will lose my stake only about once in $2^{100}$ times, treating each spin as a fair coin; the true chance of $100$ straight losses, $(19/37)^{100}$, is somewhat larger but still astronomically small. The rest of the time, I will win (the princely sum of $2^{-100}$ dollars).
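The doubling scheme above can be checked mechanically. A small sketch (a hypothetical helper, assuming an even-money bet where each wager is doubled after a loss) counts how many consecutive losses the bankroll survives and the probability of that run:

```python
from fractions import Fraction

def martingale_failure(bankroll, first_bet, p_lose):
    """Doubling strategy on an even-money bet: each wager doubles the
    previous one, so any single win recoups all prior losses plus
    first_bet.  Returns the number k of consecutive losses that
    exhausts the bankroll, and the probability of that run."""
    lost, bet, k = Fraction(0), Fraction(first_bet), 0
    while lost + bet <= bankroll:
        lost += bet
        bet *= 2
        k += 1
    return k, p_lose ** k

# One-dollar stake, first bet 2^-100, true roulette loss probability 19/37.
k, ruin = martingale_failure(1, Fraction(1, 2**100), Fraction(19, 37))
print(k, float(ruin))
```

With these parameters the bankroll survives $k=100$ consecutive losses, and the ruin probability $(19/37)^{100}$ is on the order of $10^{-29}$.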