Parrondo's Paradox: Show that base games are losing games


I am currently working through (or at least trying to work through) Parrondo's paradox, specifically the following instance:

  • Game 1: unfair coin flip with winning probability $\frac{1}{2} - \varepsilon$, where $\varepsilon = 0.005$.
  • Game 2: If the current winnings are a multiple of 3, we flip a coin with winning probability $p_1 = \frac{1}{10} - \varepsilon$; otherwise we flip a coin with winning probability $p_2 = \frac{3}{4} - \varepsilon$.
  • Game 3: We randomly choose Game 1 or Game 2, each with probability $\frac{1}{2}$.

What I am struggling with is showing that game 2 is actually a losing game. As a general setting, I let $X_n^{(i)}$ be the winnings from game $i$ after the $n$-th round (i.e. $X_0=0$, $X_1$ is either $1$ or $-1$, and so on). The sequence $(X_n^{(i)})_{n\in\mathbb{N}_0}$ is a Markov chain with state space $\mathbb{Z}$. First of all, my understanding of game $i$ being a losing game is that $\lim_{n\to \infty} X_n^{(i)} = -\infty$ almost surely.

For game 1, I was able to show this, as it is simply a biased (and hence transient) random walk on $\mathbb{Z}$. We can model $X_n^{(1)}$ as a sum of iid variables $Z_k$ with $\mathbb{P}(Z_k = 1) = \frac{1}{2} - \varepsilon$ and $\mathbb{P}(Z_k=-1) = \frac{1}{2} + \varepsilon$. This leads us to $$ X_n^{(1)} = \sum_{k=1}^n Z_k. $$

By the strong law of large numbers, we get $\frac{1}{n}X_n^{(1)}\overset{\text{a.s.}}{\longrightarrow} \mu$, where $\mu = \mathbb{E}(Z_1) = -2\varepsilon$. Thus $X_n^{(1)}$ asymptotically behaves like $n\mu$, and since $\mu<0$, we get $\lim_{n\to\infty}X_n^{(1)} = -\infty$ almost surely; hence game 1 is a losing game.
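To convince myself of the drift, a quick Monte Carlo check (my own sketch, not part of the proof): simulate game 1 and confirm that $\frac{1}{n}X_n^{(1)}$ is close to $\mu = -2\varepsilon = -0.01$.

```python
import random

# Simulate game 1: an unfair coin with winning probability 1/2 - eps.
eps = 0.005
p = 0.5 - eps
n = 1_000_000

random.seed(0)
x = 0
for _ in range(n):
    x += 1 if random.random() < p else -1

# The empirical average winnings per round should be close to mu = -0.01.
print(x / n)
```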

Game 2 is currently the part where I am struggling. I first considered simply writing down the transition matrix of $X^{(2)}$, but I am unsure how to proceed from there. Then I had the idea of investigating which coin flip occurs more often (i.e. whether we use the good coin with $p_2$ or the bad coin with $p_1$). For that, I set $Y_n = X_n^{(2)} \bmod 3$ and obtained the transition matrix

$$ P_Y = \begin{bmatrix}0 & p_1 & 1-p_1\\1-p_2 & 0 & p_2\\p_2 & 1-p_2&0\end{bmatrix}. $$

Furthermore, we can easily show that $Y$ is ergodic with stationary distribution $\pi = (\pi_1,\pi_2,\pi_3)$ (for the states $0,1,2$), which I numerically evaluated as $$ \pi_1 \approx 0.384 \qquad \pi_2 \approx 0.154 \qquad \pi_3 \approx 0.462. $$ The problem I'm having is using this information to show that $\lim_{n\to\infty}X_n^{(2)} = -\infty$, since I can't simply model $X^{(2)}$ as a sum of iid Bernoulli variables. The only idea I currently have is rather vague: $$ \mathbb{P}(X_n^{(2)} - X_{n-1}^{(2)} = 1) \to (\pi_2 + \pi_3)p_2 + \pi_1p_1 = \hat p, $$ then use this as the probability for an iid sequence $Z^{(2)}_n$ of Bernoulli variables with $\mathbb{P}(Z_n^{(2)}=1) = \hat p$ and $\mathbb{P}(Z_n^{(2)} = -1) = 1-\hat p$, and set $$ X_n^{(2)} \approx \sum_{k=1}^n Z_k^{(2)}. $$
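The stationary distribution and $\hat p$ can be verified numerically; a minimal sketch (the solver setup, via least squares on $\pi P_Y = \pi$ plus $\sum_i \pi_i = 1$, is my own):

```python
import numpy as np

eps = 0.005
p1 = 0.10 - eps
p2 = 0.75 - eps

# Transition matrix of Y_n = X_n^(2) mod 3, states ordered 0, 1, 2.
P = np.array([[0,      p1,     1 - p1],
              [1 - p2, 0,      p2    ],
              [p2,     1 - p2, 0     ]])

# Stationary distribution: solve pi (P - I) = 0 together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Asymptotic winning probability: bad coin at state 0, good coin otherwise.
p_hat = pi[0] * p1 + (pi[1] + pi[2]) * p2
print(pi, p_hat)   # pi ~ (0.384, 0.154, 0.462), p_hat < 1/2
```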

Edit:

After a bit of consideration, I am certain that my approach from above is correct, since the winning probability converges to $\hat p$: by the law of total probability,
$$
\begin{aligned}
\mathbb{P}(\text{win 2}) &= \mathbb P(\text{win 2} \mid Y_n = 0)\,\mathbb{P}(Y_n=0) + \mathbb P (\text{win 2} \mid Y_n = 1)\,\mathbb{P}(Y_n=1) + \mathbb{P}(\text{win 2} \mid Y_n=2)\,\mathbb{P}(Y_n=2)\\
&= p_1\mathbb{P}(Y_n=0) + p_2\bigl(\mathbb{P}(Y_n=1) + \mathbb{P}(Y_n=2)\bigr).
\end{aligned}
$$
From our previous observations we get $\mathbb P(Y_n=0) \to \pi_1$, $\mathbb P(Y_n=1) \to \pi_2$ and $\mathbb P(Y_n=2) \to\pi_3$, hence $$ \mathbb{P}(\text{win 2}) \to \hat p. $$

That means that $X^{(2)}$ essentially behaves like a random walk with $p=\hat p$ (where $p$ denotes the probability of the iid Bernoulli-variables being 1, analogous to game 1). Since $\hat p < \frac{1}{2}$ (which we can verify numerically), we get $\lim_{n\to\infty} X_n^{(2)} = -\infty$.

The Paradox: The source that stated the problem (my university professor) claims that the game 3 I described above is a winning game, meaning that the (asymptotic) winning probability is greater than $\frac{1}{2}$. This is very unintuitive to me (probably the reason it is called a paradox), as my current line of thought is

$$ \mathbb{P}(\text{win 3}) = \mathbb{P}(\text{win 3}\mid\text{play 1})\, \mathbb{P}(\text{play 1}) + \mathbb{P}(\text{win 3}\mid\text{play 2})\, \mathbb{P}(\text{play 2}), $$ i.e. another application of the law of total probability. With $$ \mathbb{P}(\text{win 3}\mid\text{play 1}) = \mathbb{P}(\text{win 1}), \qquad \mathbb{P}(\text{win 3}\mid\text{play 2}) = \mathbb{P}(\text{win 2}), $$ this results in $$ \mathbb{P}(\text{win 3}) = \frac{1}{2}\bigl(\mathbb{P}(\text{win 1}) + \mathbb{P}(\text{win 2})\bigr) \to \frac{1}{2}\left(\frac{1}{2} - \varepsilon + \hat p\right)<\frac{1}{2}. $$

I have a slight hunch, that the additional wins/losses from game 1 mess with the stationary distribution of $X^{(2)}$, but I am currently unsure how to show this.

Edit 2: Since I asked something similar after @joriki had already given an exceptional answer to my original question, I want to quickly outline my final solution. Similarly to game 2, we consider the Markov chain $Y_n = X_n^{(3)} \bmod 3$, determine its transition probabilities with the law of total probability, compute the stationary distribution of $Y$, and can thus argue that asymptotically the expected winnings $\mathbb{E}(X_n^{(3)})$ are positive.
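This final computation can be sketched numerically (my own numerics, not part of the cited answers): in game 3 the effective win probabilities are the averages of game 1's and game 2's coins, and we redo the mod-3 chain analysis with them.

```python
import numpy as np

eps = 0.005
q1 = 0.5 * ((0.5 - eps) + (0.10 - eps))   # at a multiple of 3
q2 = 0.5 * ((0.5 - eps) + (0.75 - eps))   # otherwise

# Same chain structure as for game 2, with the averaged probabilities.
P = np.array([[0,      q1,     1 - q1],
              [1 - q2, 0,      q2    ],
              [q2,     1 - q2, 0     ]])

# Stationary distribution via pi (P - I) = 0 plus normalization.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Asymptotic winning probability of game 3: exceeds 1/2, so positive drift.
win3 = pi[0] * q1 + (pi[1] + pi[2]) * q2
print(win3)
```

The point of the paradox is visible here: mixing the games changes the stationary distribution over the residues mod 3, so the bad coin is used less often than the naive averaging argument suggests.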

Answer by joriki (accepted):

We can actually get by with even less effort than in lulu's (otherwise perfect) solution, and without any help from our electronic friends. Let's first analyze the game without the $\epsilon$ shift in the probabilities. If we're at $1$, the probability that we reach $0$ before reaching $3$ is

$$ \frac14+\frac34\cdot\frac14\cdot\frac14+\cdots=\frac14\sum_{k=0}^\infty\left(\frac3{16}\right)^k=\frac4{13}\;. $$

Thus, if we're at $2$, the probability that we reach $0$ before reaching $3$ is $\frac14\cdot\frac4{13}=\frac1{13}$, since we must first step down to $1$.

That means that if we start at $0$, the next multiple of $3$ we reach will be $-3$ with probability $\frac9{10}\cdot\frac1{13}=\frac9{130}$ and $+3$ with probability $\frac1{10}\cdot\left(1-\frac4{13}\right)=\frac9{130}$ (and with the remaining probability it will be $0$ again). So the random walk on the multiples of $3$ is fair without the $\epsilon$ shift. You can either argue that shifting the probabilities to your disadvantage will make it unfair, or you can repeat the calculation with the slightly more cumbersome numbers.
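The arithmetic above can be double-checked with exact fractions (my own sanity check, $\epsilon = 0$):

```python
from fractions import Fraction

p2 = Fraction(3, 4)    # up-probability away from multiples of 3
p1 = Fraction(1, 10)   # up-probability at a multiple of 3

# From 1: P(hit 0 before 3) solves P = 1/4 + (3/4)(1/4) P.
P1 = Fraction(1, 4) / (1 - Fraction(3, 16))
# From 2: must first step down to 1, then hit 0 from there.
P2 = Fraction(1, 4) * P1

# Starting at 0: probabilities that the next multiple of 3 is -3 resp. +3.
down = (1 - p1) * P2
up = p1 * (1 - P1)
print(P1, P2, down, up)   # 4/13, 1/13, 9/130, 9/130
```

Since `down == up`, the induced walk on the multiples of $3$ is indeed fair without the $\epsilon$ shift.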

Answer by lulu:

Game $\#2$ is "periodic" in the sense that it restarts at each multiple of $3$. Thus it suffices to analyze the version in which you stop when your score is $\pm 3$.

For $n\in \{0,\pm 1, \pm 2\}$, let $E[n]$ denote the expected value of this stopped game when your current score is $n$. We have the equations:

$$E[2]=p_2\times 3+(1-p_2)\times E[1]$$ $$E[1]=p_2\times E[2]+(1-p_2)\times E[0]$$ $$E[0]=p_1\times E[1]+(1-p_1)\times E[-1]$$ $$E[-1]=p_2\times E[0]+(1-p_2)\times E[-2]$$ $$E[-2]=p_2\times E[-1]+(1-p_2)\times (-3)$$

We only care about $E[0]$, which works out to be $$\boxed {E[0] = -\frac {73443}{446300}\approx -0.16456}$$
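The boxed value can be confirmed by solving the five linear equations numerically; a sketch (my own setup, with $\varepsilon = 0.005$):

```python
import numpy as np

eps = 0.005
p1 = 0.10 - eps
p2 = 0.75 - eps

# Unknowns ordered E[-2], E[-1], E[0], E[1], E[2]; each row is one of
# the five equations, rearranged to put the constants on the right.
A = np.array([
    [1.0,       -p2,       0.0,       0.0,       0.0],  # E[-2] = p2 E[-1] - 3(1-p2)
    [-(1 - p2),  1.0,      -p2,       0.0,       0.0],  # E[-1] = p2 E[0] + (1-p2) E[-2]
    [0.0,       -(1 - p1),  1.0,      -p1,       0.0],  # E[0]  = p1 E[1] + (1-p1) E[-1]
    [0.0,        0.0,      -(1 - p2),  1.0,      -p2],  # E[1]  = p2 E[2] + (1-p2) E[0]
    [0.0,        0.0,       0.0,      -(1 - p2),  1.0], # E[2]  = 3 p2 + (1-p2) E[1]
])
b = np.array([-3 * (1 - p2), 0.0, 0.0, 0.0, 3 * p2])
E = np.linalg.solve(A, b)
print(E[2])   # E[0], which matches -73443/446300
```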

Here is the Wolfram Alpha solver. It disliked my variable names, so I used $A$ for $E[0]$.