'Philosophical' way of looking at a Bernoulli Trial


I know everything that will be said in the following lines is wrong. Wrong in every way. But let's cut to the chase. I've been thinking about this while playing some game online. I'm not asking for a mathematical proof countering the following, but rather asking for people to share their opinion on this (hence 'philosophical').

Imagine a game where you have a p = 1/x chance of success. Obviously, you have a (x-1)/x chance of failing. Alright, here is the only rule I want to stress: I will play until I win once (no matter how many tries it takes), and then I will stop.

Now, let's say I have a 1/100 chance of winning. Obviously, mathematically and intuitively, it is easy to accept that these are independent tries with equal probabilities (i.e. I have a 1/100 chance of success on my next try). BUT, one could argue that every time you play and fail, you are one step closer to winning (Gambler's Fallacy, everyone?). It is not true mathematically, because there is one state of the world in which you lose forever. But it is easy to 'accept' that I will win at least once with such a p (p = 0.01) given A LOT of tries. If you aren't convinced, add a trillion tries; if you still aren't convinced, add infinitely many tries. And if even that doesn't convince you, suppose my friend can see the future and knows I will win for the first time on try 152. As I play along and lose, I am slowly creeping toward my winning try (N = 152); therefore, every time I play, I am converging toward success.

Looking at the problem this way, I 'feel' my chances of success are increasing (or at least that my winning try is approaching) as n -> infinity, even though p = 0.01 for every n. With a much higher p (e.g. flipping a coin; p = 0.5), it is easy to 'verify' empirically: you wouldn't have enough time in your lifetime to keep losing forever, so accepting my one rule would realistically be easy to do.
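To make the "with A LOT of tries" intuition concrete, here is a minimal sketch (not from the original post) of the quantity it rests on: the probability of at least one success in n independent trials, which is 1 - (1-p)^n and does tend to 1 as n grows, even though each individual trial stays at p.

```python
def prob_at_least_one_success(p: float, n: int) -> float:
    """P(at least one success in n independent trials with success prob p)."""
    return 1.0 - (1.0 - p) ** n

p = 0.01
for n in (100, 500, 1000):
    # Approaches 1 as n grows, even though every single trial stays at p = 0.01.
    print(n, prob_at_least_one_success(p, n))
```

Note what this does and does not say: the cumulative probability over many future tries rises toward 1, but the conditional probability on the next try, given any string of past failures, is still exactly p.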

Thoughts?


Here is what a mathematician or statistician might do with this philosophical speculation of yours.

Given that the probability of success of the game is $p$, one can work out the probability of the following events, using the ordinary laws of probability, including the hypothesis that each subsequent trial is independent. Here's what the result would be:

  1. The probability that the first success occurs on game $1$ is $p$.
  2. The probability that the first success occurs on game $2$ is $(1-p) \times p$.
  3. The probability that the first success occurs on game $3$ is $(1-p) \times (1-p) \times p = (1-p)^2p$.

. . .

$n$. The probability that the first success occurs on game $n$ is $(1-p)^{n-1}p$.

. . .
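The pattern above is the geometric distribution. A small sketch (my own addition, with a hypothetical helper name) that computes $(1-p)^{n-1}p$ directly:

```python
def pmf_first_success(p: float, n: int) -> float:
    """P(first success occurs on game n): (n-1) failures, then one success."""
    return (1.0 - p) ** (n - 1) * p

# With a fair coin (p = 0.5): the terms halve each game.
for n in (1, 2, 3):
    print(n, pmf_first_success(0.5, n))  # 0.5, 0.25, 0.125
```

Summing these probabilities over all $n$ gives $1$, which is the precise sense in which you are guaranteed to win "eventually" despite every trial being independent.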

Okay so far. Now let's test it. We'll do a new experiment. Each trial of the new experiment is like this:

  • Repeat the game until the first success, and record how many games it took, call that number $R$.

Now, let's repeat this new experiment a lot of times. Record the frequency distribution, focussing maybe just on the first few values of $R$: the frequencies of $R=1$, of $R=2$, and of $R=3$. You could work out with statistics how many repetitions you should use in order to accurately test the hypothesis of independence, and that number would depend on the value of $p$. But the idea is: do a zillion repetitions.

Now, compare your frequency values with the actual probabilities: $P(R=1)=p$; $P(R=2)=(1-p) \times p$; and $P(R=3)=(1-p)^2 \times p$.
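The experiment described above can be sketched in a few lines (a simulation of my own, standing in for the "zillion repetitions"; function names are hypothetical): repeat the game until the first success, record $R$, tally the frequencies of $R=1,2,3$, and compare them with the geometric probabilities.

```python
import random

def first_success_trial(p: float, rng: random.Random) -> int:
    """Play independent games with success prob p; return the index R of the first win."""
    n = 1
    while rng.random() >= p:  # each call is an independent game
        n += 1
    return n

def empirical_freqs(p: float, reps: int, max_r: int, seed: int = 0) -> list:
    """Run `reps` trials of the experiment; return observed frequencies of R = 1..max_r."""
    rng = random.Random(seed)
    counts = [0] * (max_r + 1)
    for _ in range(reps):
        r = first_success_trial(p, rng)
        if r <= max_r:
            counts[r] += 1
    return [c / reps for c in counts[1:]]

p = 0.5
freqs = empirical_freqs(p, reps=100_000, max_r=3)
theory = [(1 - p) ** (n - 1) * p for n in (1, 2, 3)]
for n, (f, t) in enumerate(zip(freqs, theory), start=1):
    print(f"R={n}: observed {f:.4f}  vs  predicted {t:.4f}")
```

If the independence hypothesis holds, the observed frequencies should sit within sampling error of $p$, $(1-p)p$, and $(1-p)^2p$; a systematic drift away from them is exactly what the Gambler's Fallacy would require, and what the data will not show.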

Armed with that comparison, you will now be in a better position to evaluate your own philosophical speculations about the independence hypothesis and the Gambler's Fallacy.