Though I can't provide a concrete example, in the history of science there must have been numerous cases where scientists built complicated probabilistic models for which they knew the probabilities of some simple events, but for which it was very difficult to formally compute the probabilities of various other, more complicated events composed of these simple ones. (For example, take a graph with particles on it that jump at times $t=1,2,3,\ldots$ with some probability $p\in(0,1)$ to an adjacent node and merge into a single new particle whenever two particles meet; these are very simple starting probabilities, but it can become a research-level problem to compute the distribution of the time until only one particle remains, randomly jumping around on the graph.)
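To make the example concrete, here is a minimal modern sketch of such a simulation (the graph, the laziness probability, and the function name are my own illustrative choices, not part of any historical model): coalescing lazy random walks on a cycle, with the coalescence time estimated by repetition.

```python
import random

def coalescence_time(adjacency, p, rng):
    """Simulate coalescing random walks until one particle remains.

    adjacency: dict mapping each node to a list of its neighbours.
    Every node starts with one particle; at each time step, each particle
    jumps with probability p to a uniformly chosen neighbour, and particles
    landing on the same node merge. Returns the number of steps taken.
    """
    particles = list(adjacency)  # one particle per node
    t = 0
    while len(particles) > 1:
        t += 1
        moved = []
        for node in particles:
            if rng.random() < p:
                node = rng.choice(adjacency[node])
            moved.append(node)
        particles = list(set(moved))  # particles on the same node merge
    return t

# Monte Carlo estimate of the expected coalescence time on a 6-cycle
rng = random.Random(0)
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
times = [coalescence_time(cycle, 0.5, rng) for _ in range(2000)]
print(sum(times) / len(times))
```

The only random primitive consumed here is a stream of uniform numbers on $(0,1)$, which is exactly what the question below is about: where did such a stream come from before computers?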
In that case, the probabilities are easy to estimate by simulation. How did scientists estimate such probabilities by simulation if they had neither a computer nor books with look-up tables of random numbers that could quickly supply them with a list of (uniformly) random numbers (from which samples of almost any other distribution can then be constructed)? Many sophisticated ways of generating pseudo-random numbers have been devised in recent decades, but I'm interested in what people did in, e.g., 1800, when such methods were not yet available.
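As an aside on the parenthetical claim above: one standard way to construct other distributions from uniform numbers is inverse-transform sampling, sketched here for the exponential distribution (the parameter values and names are my own illustration):

```python
import math
import random

def exponential_from_uniform(u, lam):
    """Inverse-transform sampling: if U is uniform on (0,1) and F is a CDF,
    then F^{-1}(U) has distribution F. For Exponential(lam), the CDF is
    F(x) = 1 - exp(-lam * x), whose inverse is -ln(1 - u) / lam."""
    return -math.log(1.0 - u) / lam

rng = random.Random(42)
samples = [exponential_from_uniform(rng.random(), 2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean should be close to 1/lam = 0.5
```

So a supply of uniform random numbers, however produced, is in principle all a simulator needs.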
Did they devise some clever ad-hoc physical experiment and hope that the numbers obtained were sufficiently random to carry out their simulation? If so, what experiment was that?