Where is my mistake in calculating stopping time?


A gambler has $w$ dollars, and in each game he wins or loses a dollar with equal probability $p=1/2$. I want to calculate the average number of games the gambler can play before he runs out of money; denote this average $\tau(w)$.

(Yes, I know this is a classic problem, but I'm trying to figure it out on my own using basic probability theory before reading up on it in detail. It is not for a class.)

I am getting a nonsense result and my question is: where is my mistake?

Here's my reasoning:

After a single game, with probability $1/2$ he'll have $w+1$ dollars and with probability $1/2$ he'll have $w-1$. Thus, after that one game, his expected number of remaining games is $\tau(w+1)$ with probability $1/2$ and $\tau(w-1)$ with probability $1/2$. So

$$\tau(w) = \frac{1}{2}\bigl(\tau(w-1)+1\bigr) + \frac{1}{2}\bigl(\tau(w+1) + 1\bigr).$$

This is valid for $w \ge 1$. $\tau(0)$ is obviously $0$.

We can solve this recurrence equation and get

$$\tau(w) = w \cdot (\tau(1)+1) - w^2$$

(can be verified by backsubstitution).
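The backsubstitution can be checked numerically. A minimal Python sketch, where the constant `c` is a hypothetical stand-in for the unknown $\tau(1)+1$ (its actual value does not matter for verifying the recurrence):

```python
# Check that tau(w) = w*c - w**2 satisfies
# tau(w) = (tau(w-1)+1)/2 + (tau(w+1)+1)/2 for w >= 1, with tau(0) = 0.
c = 5.0  # hypothetical value of tau(1) + 1; any value works for the check

def tau(w):
    return w * c - w**2

assert tau(0) == 0.0
for w in range(1, 20):
    lhs = tau(w)
    rhs = 0.5 * (tau(w - 1) + 1) + 0.5 * (tau(w + 1) + 1)
    assert abs(lhs - rhs) < 1e-9, (w, lhs, rhs)

print("recurrence satisfied for w = 1..19")
```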

But this result cannot be correct because for large enough $w$ it will be negative due to the $-w^2$ term. So where did I go wrong?


Best answer

The issue is that $\tau(w)$ is in fact infinite. To see this, consider the following modification: the gambler starts with $w$ dollars and plays until he runs out of money or accumulates $a$ dollars, where $a\geq w$. Let $\tau_a(w)$ be the expected length of this game. Then the recurrence relation for $\tau_a$ is $$ \tau_a(w)=\frac{1}{2}\tau_a(w+1)+\frac{1}{2}\tau_a(w-1)+1$$ for $1\leq w\leq a-1$, with the two boundary conditions $\tau_a(0)=0$, $\tau_a(a)=0$.

The solution to this recurrence relation is $\tau_a(w)=w(a-w)$, and taking $a\to\infty$ shows that $\tau(w)$ is infinite for the game you considered.
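A quick Monte Carlo sketch of the bounded game (plain Python; the name `bounded_game_length` is introduced here for illustration) agrees with $\tau_a(w)=w(a-w)$ for moderate $a$:

```python
import random

random.seed(0)  # fix the RNG so the run is reproducible

def bounded_game_length(w, a):
    """Play the bounded game from capital w until it hits 0 or a;
    return the number of games played."""
    steps = 0
    while 0 < w < a:
        w += random.choice((-1, 1))
        steps += 1
    return steps

w, a, trials = 3, 10, 20000
mean = sum(bounded_game_length(w, a) for _ in range(trials)) / trials
print(f"simulated mean: {mean:.2f}, exact w*(a-w) = {w * (a - w)}")
# the simulated mean should land close to 21
```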

Another way to see that $\tau(w)$ is infinite is as follows: Let $X_n$ be the gambler's capital at time $n$. Then $X_0=w$ and $\{X_n\}$ is a martingale (with respect to the natural filtration). Let $T=T(w)$ be the first time the gambler reaches zero, so $\tau(w)=\mathbb{E}[T(w)]$. If $\tau(w)$ were finite, then we could use the Optional Stopping Theorem to obtain $$ 0=\mathbb{E}[X_T]=\mathbb{E}[X_0]=w $$ which (provided that $w>0$) is a contradiction.

Second answer

Let's see if we can explicitly calculate $\tau(1)$.
Starting from $1$, the gambler can reach $0$ in a single step, or first reach $0$ after $2n+1$ steps. In the latter case, the first $2n$ steps must contain $n$ up-steps and $n$ down-steps arranged so that the capital never drops below $1$; the final step is then the first move down to $0$. Such arrangements correspond to Dyck paths from $(0,0)$ to $(n,n)$, of which there are $C_n = \frac{1}{n+1} \binom{2n}{n}$ (the $n$-th Catalan number).
As each step has probability $1/2$, the probability of first reaching $0$ in exactly $2n+1$ steps is $$D_n =\frac{1}{2^{2n+1}} C_n = \frac{1}{2^{2n+1}} \frac{1}{n+1} \frac{(2n)!}{n!\,n!}.$$ By Stirling's formula, $C_n \sim 4^n/(n^{3/2}\sqrt{\pi})$, so $(2n+1)D_n \sim 1/\sqrt{\pi n}$, and the series diverges: $$\tau(1) =\frac{1}{2} + \sum_{n=1}^\infty (2n+1)D_n = \infty.$$
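The divergence can also be seen numerically. A Python sketch (the helper `partial_sum` is a name introduced here) computes partial sums of $(2n+1)D_n$ using the ratio $D_{n+1}/D_n = (2n+1)/(2(n+2))$, which follows from the Catalan-number recurrence $C_{n+1} = C_n \cdot 2(2n+1)/(n+2)$; the partial sums grow like $2\sqrt{N/\pi}$, i.e. without bound:

```python
import math

def partial_sum(N):
    """Sum (2n+1)*D_n for n = 0..N, with D_0 = 1/2 and
    D_{n+1} = D_n * (2n+1) / (2*(n+2))."""
    D, total = 0.5, 0.0
    for n in range(N + 1):
        total += (2 * n + 1) * D
        D *= (2 * n + 1) / (2 * (n + 2))
    return total

for N in (10**2, 10**4, 10**6):
    # compare against the asymptotic growth rate 2*sqrt(N/pi)
    print(N, partial_sum(N), 2 * math.sqrt(N / math.pi))
```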
So the formula $\tau(w) = w ( \tau(1) +1 ) - w^2$ from the question does not pin down a finite value; it is consistent only with $\tau(w) = \infty$ for all $w >0$.