I have the following coinflip game:
We flip a coin arbitrarily many times (but finitely many).
For every occurrence of tails-tails (tails twice in a row), A gets a point, while for every occurrence of tails-heads (tails followed by heads), B gets a point. If the pair is heads-heads or heads-tails, neither player gets a point. The winner is the player with the most points when the flipping stops.
I simulated three different game lengths: 47, 100 and 1000 flips. Each length was simulated 100,000 times, and I estimated the probability of A or B winning as (number of times that player won) / (number of simulations), encoding tails as 0 and heads as 1.
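For reference, here is a minimal Python sketch of the kind of simulation I describe (my actual code may differ in details; I am assuming here that every pair of consecutive flips is counted, i.e. pairs overlap, and that a tie is a win for neither player):

```python
import random

def play_game(n_flips):
    """Play one game of n_flips fair coin flips; tails = 0, heads = 1.
    Returns (points_A, points_B), scoring every pair of consecutive flips."""
    flips = [random.randint(0, 1) for _ in range(n_flips)]
    a = b = 0
    for prev, cur in zip(flips, flips[1:]):
        if prev == 0 and cur == 0:    # tails-tails -> point for A
            a += 1
        elif prev == 0 and cur == 1:  # tails-heads -> point for B
            b += 1
    return a, b

def win_probabilities(n_flips, n_sims=100_000):
    """Estimate P(A wins) and P(B wins) as wins / number of simulations."""
    wins_a = wins_b = 0
    for _ in range(n_sims):
        a, b = play_game(n_flips)
        if a > b:
            wins_a += 1
        elif b > a:
            wins_b += 1
    return wins_a / n_sims, wins_b / n_sims

for n in (47, 100, 1000):
    print(n, win_probabilities(n))
```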
For each length I found that B had a slightly higher chance of winning than A, which leaves me with two questions:
- Why is this so? Is there a proof / probability-theoretic explanation?
- When I run my code, the difference in A's probability of winning between 47 flips and 100 flips (about 0.03) is larger than the difference between 100 and 1000 flips (about 0.005). What explains this? (I thought a high number of simulations should prevent big differences with respect to game length.)