Can the variance of betting on sports be simulated accurately with RNG?
Take an example where a punter is presented with an event that has a 15% chance of winning, but is offered decimal odds of 10.0 on it (9/1 on their money).
If we built an RNG simulation of this bet using those probabilities, would it accurately represent the variance one might expect from this bet in the long term, or would the variance of the sports bet actually be lower than that?
If you have repeated independent trials with success probability $p = 0.15$, then the random variable $X \sim \mathsf{Binom}(n, p)$ models the number of successes in $n$ trials. In that case $Var(X) = np(1-p)$. If you want to know the gain or loss from betting at specific odds, that can be expressed in terms of $X$, and the variance of the amount won or lost can be found analytically.
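To make that explicit (this derivation is mine, not part of the original answer): for a unit stake at decimal odds $d = 10$, the profit on a single bet is $W = dB - 1$, where $B \sim \mathsf{Bern}(p)$, so

$$W = \begin{cases} d - 1 = 9, & \text{with probability } p = 0.15,\\ -1, & \text{with probability } 1-p = 0.85.\end{cases}$$

Then $E(W) = dp - 1 = 0.5$ and $Var(W) = d^2\,p(1-p) = 100(0.1275) = 12.75$. Over $n$ independent bets the total profit is $T = dX - n$, so $Var(T) = d^2\,np(1-p)$: the variance of the monetary outcome is just the binomial variance scaled by the squared odds.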
Of course, this can be simulated if it is a simulation assignment, but I don't see the point in simulating something that is so easy to derive analytically. Results from many simulated trials would closely estimate the variance. Suppose $n = 10.$ Then $E(X) = np = 1.5,\,$ $Var(X) = np(1-p) = 1.275,$ and $SD(X) = \sqrt{np(1-p)} \approx 1.129.$
A simulation in R statistical software of a million 10-bet sessions is shown below. With a million iterations, typically the first three (often four) significant digits will be accurate for $E(X)$ and $SD(X).$ (The variance is in squared units, so it may have a somewhat larger simulation error in terms of significant digits.)
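The original R code did not survive in this copy of the answer; a minimal sketch of such a simulation might look like the following (variable names and the seed are my own choices, not the original author's):

```r
set.seed(2023)        # for reproducibility; any seed works
n <- 10               # bets per session
p <- 0.15             # win probability per bet
d <- 10               # decimal odds
m <- 10^6             # number of simulated sessions

x <- rbinom(m, n, p)  # wins in each 10-bet session
mean(x); var(x); sd(x)        # compare: np = 1.5, np(1-p) = 1.275, sqrt = 1.129

profit <- d * x - n   # total profit per session at unit stakes
mean(profit); sd(profit)      # compare: n(dp - 1) = 5, d*sqrt(np(1-p)) = 11.29
```

The simulated moments should match the analytic values to about three significant digits, illustrating the point that the RNG simulation reproduces exactly the variance the binomial model predicts.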
The histogram below shows the simulation results; red dots mark the exact binomial probabilities.