Calculating p from a set of assumed probabilities and their actual outcomes


I am playing a computer game where many actions have a set chance to succeed communicated to the player as a percent chance of success.

I have begun to suspect that the displayed chances do not reflect the real probability of success in the game (much as you might suspect that a die or coin is weighted). In order to test this systematically, I have written out a table where each entry shows the game's displayed success chance for a given action and then the actual result (succeed or fail) of the action as I play.

   % Chance of Success |  Result |
                   50% | succeed |
                   60% |    fail |
                   20% |    fail |
                   80% | succeed |
                   ... |     ... |

A simple analysis of the data I've collected suggests that the actual chance of success is far lower than the displayed one, but I'm concerned that my sample size is too small for the result to be statistically significant.

So, framing this as an experiment where the null hypothesis is "the game's displayed success values represent the true probability of success" and the alternative hypothesis is that they don't, how can I calculate the statistical significance of my results (ideally as a p-value)?

You might do a "test of the value of a population mean", also known as a one-sample t-test. First calculate the average number of successes you would expect under the game's displayed probabilities: sum the "chance of success" column in your table and divide by the number of rows. That is the assumed mean of the population. Then compare the assumed mean with the proportion of successes you actually experienced, which is the sample mean. The t-test will tell you whether the difference between the two is statistically significant.
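The steps above can be sketched in a few lines of Python. The `trials` list here is made-up placeholder data, not real game records; substitute your own table of (displayed chance, outcome) pairs, with outcome coded as 1 for succeed and 0 for fail:

```python
import math

# Hypothetical data: (displayed success chance, outcome) pairs.
# Replace with your own recorded table; 1 = succeed, 0 = fail.
trials = [(0.50, 1), (0.60, 0), (0.20, 0), (0.80, 1), (0.70, 0), (0.40, 0)]

n = len(trials)
# Assumed population mean: the average of the displayed success chances.
mu0 = sum(p for p, _ in trials) / n
# Sample mean: the proportion of successes you actually observed.
xbar = sum(r for _, r in trials) / n
# Sample standard deviation of the 0/1 outcomes (Bessel-corrected).
s = math.sqrt(sum((r - xbar) ** 2 for _, r in trials) / (n - 1))
# One-sample t statistic with n - 1 degrees of freedom.
t = (xbar - mu0) / (s / math.sqrt(n))
print(f"n={n}, assumed mean={mu0:.3f}, sample mean={xbar:.3f}, t={t:.3f}")
```

To turn the t statistic into a p-value, look it up in a t-distribution table with n − 1 degrees of freedom, or use a library such as SciPy (`scipy.stats.t.sf`); the Python standard library has no t-distribution CDF built in.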

You can find the formula for the t-test in many places, either in a statistics textbook or online.