Binomial distribution with variable success probability.


For context, I was playing a videogame, and in said videogame the following (similar) scenario can occur:

You're betting against a dealer on a specific face landing (say, the face showing $1$) when throwing an $n$-sided dice. At the beginning of each throw, the dealer hands you a single fair $n$-sided dice, which you throw to see the outcome: if the dice lands on $1$ you win; anything else, you lose. However, there's a twist. So that you don't quit while on a losing streak, the dealer will sometimes hand you a loaded dice instead of the fair one. The loaded $n$-sided dice has a probability $p > \frac{1}{n}$ of landing on $1$; this way, the dealer occasionally increases your odds of winning. The loaded and fair dice are indistinguishable from one another, and the dice-switching is done randomly by the dealer, such that on any given throw there's a probability $q$ that the dice is loaded instead of fair.

I'm interested in calculating the probability of getting $k$ successes after $m$ dice throws.


As the title implies, the above game reminded me of a Binomial distribution, where the probability of success isn't fixed, but rather variable. Here's my attempt to find the probability.

If there were no switching with a loaded dice, then the standard Binomial density function would give the probability $$ P(k) = \binom{m}{k}\underbrace{\frac{1}{n^k}}_{k\text{ successes}}\underbrace{\left(1 - \frac{1}{n} \right)^{m-k} }_{m-k \text{ fails}} $$ of getting $k$ successes in $m$ trials. Now, since on any turn there's probability $q$ that the dice is loaded, there will be approximately $mq$ attempts where the success probability is $p$ instead of $\frac{1}{n}$. Thus, I believe the probability is something like this $$ P(k) =\sum_{k_1+k_2=k} \left[\binom{m-mq}{k_1}\frac{1}{n^{k_1}}\left(1 - \frac{1}{n} \right)^{m-mq-k_1} + \binom{mq}{k_2}p^{k_2}(1-p)^{mq-k_2}\right] $$ where I essentially tried to combine the fair part and the loaded part by adding them up over all ways of splitting the $k$ successes.
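For the no-switching baseline, the standard Binomial pmf is easy to evaluate numerically. A minimal sketch, using illustrative parameter values not taken from the post ($n = 6$, $m = 10$):

```python
from math import comb

def fair_pmf(k: int, m: int, n: int) -> float:
    """P(k successes in m throws of a fair n-sided dice),
    i.e. the standard Binomial(m, 1/n) density."""
    return comb(m, k) * (1 / n) ** k * (1 - 1 / n) ** (m - k)

# Sanity check: the pmf over k = 0..m sums to 1.
probs = [fair_pmf(k, 10, 6) for k in range(11)]
print(sum(probs))
```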


I'm not sure if this is the correct way to handle this sort of problem. Could anyone tell me if I'm on the right track? And in case I'm not, does anyone know how the correct solution to this problem can be obtained? Thank you!


There is 1 answer below.

On BEST ANSWER

Why go with an approximation when you can compute those probabilities directly? Assuming you know what conditional probability is, the law of total probability gives $$ \begin{align} \Bbb P(\text{dice lands }1\text{ on }k\text{-th throw}) &= \frac1n\cdot \Bbb P(\text{dice is fair at }k\text{-th throw}) + p\cdot\Bbb P(\text{dice is rigged at }k\text{-th throw}) \\ & = \frac1n(1-q) + pq. \end{align} $$

So from the perspective of the gambler, who only knows that at each throw they'll get a rigged dice with probability $q$ (independently of previous results), that in that case the dice lands on $1$ with probability $p$, and that otherwise they get a fair dice, the success probability of each throw is exactly $r = \frac1n(1-q) + pq$. The random variable you're interested in is therefore $\xi_m\sim \mathrm{Binomial}(m, r)$, for which you know how to compute $\Bbb P(\xi_m = k)$. You can go ahead and compare that with the estimate you've got for different values of $m$ and $k$.
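The claim that the number of successes is $\mathrm{Binomial}(m, r)$ with $r = \frac1n(1-q) + pq$ can be checked by simulating the game itself. A minimal sketch, with illustrative parameters not taken from the post ($n = 6$, $p = 0.5$, $q = 0.3$, $m = 8$):

```python
import random
from math import comb

def game_pmf_exact(k: int, m: int, n: int, p: float, q: float) -> float:
    """Binomial(m, r) pmf with the marginal per-throw success
    probability r = (1-q)/n + p*q from the answer above."""
    r = (1 - q) / n + p * q
    return comb(m, k) * r ** k * (1 - r) ** (m - k)

def simulate(m: int, n: int, p: float, q: float,
             trials: int = 100_000, seed: int = 1) -> list[float]:
    """Monte Carlo estimate of P(k successes in m throws):
    each throw uses the loaded dice with probability q."""
    rng = random.Random(seed)
    counts = [0] * (m + 1)
    for _ in range(trials):
        successes = 0
        for _ in range(m):
            win_prob = p if rng.random() < q else 1 / n
            if rng.random() < win_prob:
                successes += 1
        counts[successes] += 1
    return [c / trials for c in counts]
```

With enough trials, the simulated frequencies should agree with `game_pmf_exact` to within Monte Carlo noise, which is how one could also test the question's approximate formula against the exact answer.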