Calculating required sample size given X outcomes (empirical measurements)


I am doing some testing on a game to try to establish whether the outcomes of various mini-game mechanics match their advertised odds.

Here are a few examples:

  1. Rolling two dice (presumed fair)
    36 equally likely outcomes, which collapse into 11 distinct sums (2-12) with unequal probabilities
  2. Spinning a 10-segment wheel (presumed fair, i.e. the odds of each segment are 1/10)
    10 outcomes with equal probabilities
  3. Drawing a card with 1 of 3 outcomes (odds of each being 3/6, 2/6 and 1/6 respectively)
    3 outcomes with unequal probabilities; the rarest has probability 1/6
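For concreteness, here is how I've been tabulating the expected distributions above (a small Python sketch; the outcome labels for the card draw are just placeholders I made up):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Mechanic 1: sum of two fair dice -> 36 equally likely rolls, 11 distinct sums
dice_counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
dice_probs = {total: Fraction(count, 36) for total, count in dice_counts.items()}

# Mechanic 2: 10-segment wheel, each segment advertised at 1/10
wheel_probs = {segment: Fraction(1, 10) for segment in range(10)}

# Mechanic 3: card draw with advertised odds 3/6, 2/6, 1/6
# ("A"/"B"/"C" are placeholder names for the three outcomes)
card_probs = {"A": Fraction(3, 6), "B": Fraction(2, 6), "C": Fraction(1, 6)}

# Sanity check: each mechanic's probabilities sum to exactly 1
for name, probs in [("dice", dice_probs), ("wheel", wheel_probs), ("cards", card_probs)]:
    assert sum(probs.values()) == 1
    print(name, dict(probs))
```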

How many samples are 'enough'?

I am trying to understand how many samples I need to gather for each mechanic (and some others not mentioned) in order to say the observed results are within an "X% margin of error" of the advertised odds.

Can someone advise whether there is a formula along the lines of:
O [number of outcomes] ÷ N [sample size] × X [some factor] = margin of error
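From my half-remembered stats, I suspect the relevant formula might be the normal-approximation margin of error for a single outcome's proportion, MOE = z · sqrt(p(1 - p) / n), rearranged to solve for n - but I'm not sure that's right, which is partly why I'm asking. A sketch of what I mean (the z = 1.96 value for ~95% confidence is an assumption on my part):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation margin of error for an outcome with
    true probability p, observed over n samples, at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

def required_samples(p, target_moe, z=1.96):
    """The same formula solved for n: the smallest sample size whose
    margin of error is at most target_moe."""
    return math.ceil(z ** 2 * p * (1 - p) / target_moe ** 2)

# e.g. the rarest card outcome, advertised at 1/6, pinned down to +/- 1%:
n = required_samples(1 / 6, 0.01)
print(n, margin_of_error(1 / 6, n))
```

If that is the right idea, I'd presumably apply it per outcome and size the test for the rarest one, but I'd appreciate confirmation or correction.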

Any help would be greatly appreciated!


Research attempted / knowledge used

  • I did stats in school, but that was admittedly many years ago now. I know there are terms like 'margin of error' and 'confidence interval' - I think the former is what I'm after(?)
  • As part of my googling I read about p-values and z-scores, but I don't really understand them or how to calculate them
  • I looked through: Calculating Margin Of Error but this was way above my level of understanding
  • I looked through: Correlation between margin of error and sample size, but this (and other pages I read) talked about a mean value - which doesn't apply to my data/intended calculation (AFAIK).