Chance of losing a biased "Random Walk" game


Consider a game where you start with 1 point.

Then you flip a fair coin infinitely many times.

For each heads, you gain 2 points. For each tails, you lose 1 point.

What is the probability that your score never goes below 1?

edit: I am looking specifically for the answer with flipping infinitely many times, not the probability after $N$ flips as $N$ approaches infinity.
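For a quick numerical sanity check, here is a Monte Carlo sketch (not a proof): a run is treated as never going below 1 once it reaches a high threshold, which is an approximation justified by the walk's positive drift. The function name and threshold are my own choices.

```python
# Monte Carlo estimate of P(score never goes below 1). A run is treated
# as surviving once it reaches a high threshold: the walk has drift
# +1/2 per flip, so from a high score it almost never returns to 0
# (this truncation is an approximation, not exact).
import random
from math import sqrt

def survives(threshold: int = 100, rng=random) -> bool:
    score = 1
    while 0 < score < threshold:
        score += 2 if rng.random() < 0.5 else -1  # heads +2, tails -1
    return score >= threshold

rng = random.Random(0)          # fixed seed for reproducibility
trials = 100_000
wins = sum(survives(rng=rng) for _ in range(trials))
print(wins / trials, (3 - sqrt(5)) / 2)   # estimate vs. exact answer
```

The estimate should land near $0.382$, matching the closed-form answers below.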


There are 4 best solutions below

Answer (4 votes)

Here is a different approach: I calculated the probability of losing if you throw exactly $n$ heads, for $n=0,1,2,\dots$

A001764 enumerates (among other things) the "Number of lattice paths of $n$ East steps and $2n$ North steps from $(0,0)$ to $(n,2n)$ and lying weakly below the line $y=2x.$" If we consider an East step as heads and a North step as tails, this is just the number of losing sequences $a_n$ with exactly $n$ heads, considering that we will toss one more tails at the end.
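To make the correspondence concrete, here is a small brute-force check (a sketch; `count_first_passage` is a name I'm introducing) that directly counts the losing sequences with exactly $n$ heads, i.e. sequences of $n$ heads and $2n+1$ tails that first reach score $0$ on the very last toss, and compares against the closed form:

```python
# Count sequences of n heads (+2) and 2n+1 tails (-1) that, starting
# from score 1, first reach 0 on the very last toss, and compare with
# a_n = C(3n, n) / (2n + 1) from A001764.
from itertools import combinations
from math import comb

def count_first_passage(n: int) -> int:
    length = 3 * n + 1              # n heads plus 2n+1 tails
    total = 0
    for heads in combinations(range(length), n):
        head_set = set(heads)
        score, ok = 1, True
        for i in range(length):
            score += 2 if i in head_set else -1
            if score == 0 and i < length - 1:
                ok = False          # hit zero too early
                break
        if ok and score == 0:
            total += 1
    return total

for n in range(5):
    print(n, count_first_passage(n), comb(3 * n, n) // (2 * n + 1))
```

The two columns agree for small $n$, consistent with the lattice-path interpretation above.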

According to A001764, $$a_n={{3n\choose n}\over 2n+1}$$ and the probability of losing is $$ \sum_{n=0}^{\infty}a_n2^{-3n-1} $$

A July 17, 2018 comment on A001764 by Michael Somos says that $$A(1/8) = -1+\sqrt5$$ where $A(z)$ is the generating function of the $a_n$. It's easy to see that $\frac12A(1/8)$ is the probability of losing, so that the probability of winning is $$1-\frac12A(1/8)={3-\sqrt5\over2}$$

Alternatively, Wolfram Alpha gives this value if one substitutes the explicit formula for $a_n$ into the sum.

I've been trying to find a simple derivation for the formula for $a_n$ based on Milo Brandt's elegant solution. If the probability of heads is $p,$ then the same argument as Milo gave shows that the probability of losing is $$-\frac12+\frac{\sqrt{4p-3p^2}}{2p}\tag{1}$$ The series is $$(1-p)\sum_{n=0}^{\infty}a_ny^n, \text{ where } y = p(1-p)^2$$

So far, I haven't been able to manipulate $(1)$ into an expression in $y$ that will allow me to compute the $a_n.$

NOTE

I have removed the explanation of why I was dissatisfied with my original solution, which leaves some of the comments without context. My original solution was much like Ian's, but my argument wasn't complete. This point is discussed in the comments to Ian's solution, both by Ian and by Aaron Montgomery.

Answer (15 votes)

Consider the truncated problem, where you win once you get to at least $M$ points, and let $p_n$ be the probability of winning when your current score is $n$. This satisfies the recursion:

$$p_n=\frac{p_{n-1}+p_{n+2}}{2}$$

with the boundary conditions $p_0=0,p_M=p_{M+1}=1$. The general solution of this recursion is $c_1 r_1^n + c_2 r_2^n + c_3 r_3^n$ where $r_1,r_2,r_3$ are the roots of $x^3-2x+1=0$. One of these can be checked to be $1$; after finding that one, it is easy to compute the others as $\frac{-1 \pm \sqrt{5}}{2}$. The boundary conditions imply

$$c_1+c_2+c_3=0 \\ c_1 + c_2 r_2^M + c_3 r_3^M = 1 \\ c_1 + c_2 r_2^{M+1} + c_3 r_3^{M+1} = 1.$$

This system can be solved for finite $M$ and then you can send $M \to \infty$ at the end. This is a bit laborious. If you want a shortcut, we can use some minor heuristics to get there. Set $c_3=0$ (since the only alternative is to have $c_1$ and/or $c_2$ grow without bound). Also assume that $c_2$ grows slower than $r_2^{-M}$, so that the $c_2$ terms in the second and third equations drop out.

Then the $M \to \infty$ equations read $c_3=0,c_1+c_2=0,c_1=1$, so $c_2=-1$. Thus with these heuristics, the general solution to your problem is $p_n=1-\left ( \frac{\sqrt{5}-1}{2} \right )^n$, and the desired result in particular is $p_1=\frac{3-\sqrt{5}}{2}$.
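The heuristics can be checked numerically without solving the system symbolically: iterate the recursion with the boundary conditions until it converges. This is a sketch; `M`, the sweep count, and the tolerance are arbitrary choices of mine.

```python
# Iteratively solve the truncated game: states 0..M+1 with p_0 = 0,
# p_M = p_{M+1} = 1 and interior p_n = (p_{n-1} + p_{n+2}) / 2.
# For moderate M, p_1 should be close to (3 - sqrt(5)) / 2, since the
# truncation error is of order ((sqrt(5)-1)/2)^M.
from math import sqrt

def win_probability(M: int = 40, sweeps: int = 20000) -> float:
    p = [0.0] * (M + 2)
    p[M], p[M + 1] = 1.0, 1.0       # boundary conditions
    for _ in range(sweeps):
        for n in range(1, M):
            p[n] = (p[n - 1] + p[n + 2]) / 2
    return p[1]

print(win_probability(), (3 - sqrt(5)) / 2)   # both ~0.382
```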

Strictly speaking, this is computing the probability that your score is never $0$ and goes to infinity. It is not totally obvious that the score has to go to infinity in order to play forever without losing. Indeed there are sequences of flips, such as $HTTHTTHTT\dots$, which never hit zero and do not go to infinity.

The reason that this works is that in any finite truncation of this type, the process terminates in finite time with probability $1$. (In other words, these "oscillatory" sequences collectively have probability zero). This can be viewed as a special case of a general result about finite Markov chains: if an absorbing state is reachable from every state, then the absorption time is a.s. finite. Alternately, a shortcut for this particular problem is to note that no matter where you start, a run of at least $M/2$ successive heads will win you the game. So in an arbitrarily long (truncated) game in which you never lose you must eventually win. However you choose to see it, the only way to play forever without losing is to surpass all finite thresholds without losing, which is what was calculated above.

Answer (0 votes)

Instead of tossing coins one by one, we can consider tossing coins in rounds. At the start of round 1 you have $1$ point. In each round, you toss as many coins as you had points at the start of that round. This means you can never reach $0$ points part-way through a round: each toss loses at most $1$ point, so the number of tosses in a round is exactly the minimum needed to bring your score from its starting value to zero. So the original problem is equivalent to determining whether you have $0$ points at the end of any round.

Now if you have $k$ points at the start of a round, your total points at the end of the round will be $k+\sum_{i=1}^kX_i$, where the $X_i$ are independent and take values $+2,-1$ with probability $1/2$ each. Equivalently, this is $\sum_{i=1}^kY_i$, where $Y_i=X_i+1$ are independent taking values $3,0$ with probability $1/2$ each. So this is a Galton-Watson process with offspring distribution given by the $Y_i$. A standard result is that the extinction probability of such a process is the smallest positive root of $f(t)=t$, where $f(t)$ is the probability generating function of the offspring distribution.

Here $f(t)=(1+t^3)/2$, so the extinction probability is the smallest positive root of $t^3-2t+1=0$, which is $\frac{\sqrt{5}-1}{2}$. Thus the probability that your score never reaches $0$ is $1-\frac{\sqrt{5}-1}{2}=\frac{3-\sqrt{5}}{2}$.
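The standard way to find this root numerically is to iterate $f$ starting from $0$; the iterates $q_k$ are exactly the probabilities of extinction by generation $k$, and they increase to the smallest nonnegative fixed point. A quick sketch:

```python
# Galton-Watson extinction probability as the limit of q_{k+1} = f(q_k)
# with f(t) = (1 + t^3) / 2 and q_0 = 0; q_k = P(extinct by gen. k).
# The iteration converges geometrically since |f'| < 1 near the root.
from math import sqrt

def extinction_probability(iterations: int = 200) -> float:
    q = 0.0
    for _ in range(iterations):
        q = (1 + q ** 3) / 2
    return q

print(extinction_probability(), (sqrt(5) - 1) / 2)   # both ~0.618
```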

Answer (6 votes)

One way to solve this is to let $p_n$ be the probability that, if you start with $n$ points, you eventually reach zero points. Observe that, since the walk is translation-invariant, $p_1$ is also the probability that if you start with $n$ points, you ever have $n-1$ points.

Using this, one can see that $p_n=p_1p_{n-1}$: to get from $n$ to zero, you must first reach $n-1$ (probability $p_1$), and then get from $n-1$ to $0$ (probability $p_{n-1}$). Iterating gives $p_n=p_1^n$.

Then, observe that $p_n=\frac{p_{n+2}+p_{n-1}}{2}$ by looking at what happens after the first flip. Substituting $n=1$ and $p_n=p_1^n$ gives $p_1=\frac{p_1^3+1}2$, so $p_1$ must be $1$ or $\frac{-1\pm\sqrt{5}}{2}$. Clearly $p_1$ cannot be $\frac{-1-\sqrt{5}}2$ because that's negative. Thus $p_1$ is either $1$ or $\frac{-1+\sqrt{5}}2$.

To see that $p_1$ is the lesser value, we can define $p_{n,t}$ to be the probability of reaching zero in at most $t$ steps. Note that $p_{n,t+1}=\frac{p_{n-1,t}+p_{n+2,t}}2$ and $p_{n,0}=0$ for $n>0$. Induction on $t$ using this formula shows that $p_{n,t}\leq r^n$ for all $n$ and $t$, where $r=\frac{-1+\sqrt{5}}2$: the key step is that $r^{n-1}+r^{n+2}=2r^n$, which follows from $r^3-2r+1=0$. Then, $p_n=\lim_{t\rightarrow\infty}p_{n,t}\leq r^n$, which implies $p_1=\frac{-1+\sqrt{5}}2$.
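Both the bound and the limit can be checked numerically by running the $p_{n,t}$ recursion itself. This is a sketch; the array size is chosen so that truncating at high scores cannot influence $p_{1,t}$ within the chosen number of steps.

```python
# Iterate p_{n,t+1} = (p_{n-1,t} + p_{n+2,t}) / 2 with p_{0,t} = 1 and
# p_{n,0} = 0 for n > 0. Scores above 1 + 2t cannot influence p_{1,t}
# (dependence climbs by at most 2 per step), so an array of size
# 2*t_max + 4 computes p_{1,t_max} exactly.
from math import sqrt

def p1_within(t_max: int = 1000) -> float:
    size = 2 * t_max + 4
    p = [1.0] + [0.0] * (size - 1)      # time-0 values
    for _ in range(t_max):
        p = ([1.0]
             + [(p[n - 1] + p[n + 2]) / 2 for n in range(1, size - 2)]
             + [0.0, 0.0])
    return p[1]

r = (sqrt(5) - 1) / 2
p1 = p1_within()
print(p1, r)   # p_{1,t} increases toward r, consistent with the bound
```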

Finally, the probability you're interested in is $1-p_1$, which equals $\frac{3-\sqrt{5}}2$.