My statistical mechanics textbook uses an approximation to derive a well-known result. The approximation is:
Suppose that in a single trial, some outcome occurs with probability P. Then over n independent trials, the probability that the outcome occurs at least once is approximately n * P, assuming both that P is small and that n isn't too large.
This approximation makes "intuitive" sense, but where does it come from mathematically, and how accurate is it?
The actual probability of getting the outcome at least once over $n$ independent trials can be computed pretty easily: it is one minus the probability that the outcome never occurs. So if the outcome has probability $p$ of occurring in an individual trial, then over $n$ trials the probability is $$ 1 - (1 - p)^n. $$ Going from here to the approximation in your book is just an application of the binomial theorem: \begin{align*} 1 - (1 - p)^n &= 1 - \sum_{i = 0}^n {n \choose i} (-1)^i p^i \\ &= 1 - \left(1 - np + {n \choose 2}p^2 - {n \choose 3}p^3 + \cdots \right) \\ &= np - {n \choose 2}p^2 + {n \choose 3}p^3 - \cdots \\ &\approx np. \end{align*}
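You can see the quality of the approximation numerically. Here is a quick sketch comparing the exact probability $1 - (1-p)^n$ with $np$ for a few illustrative $(p, n)$ pairs (the specific values are my own, chosen to show both the good and the bad regime):

```python
# Compare the exact probability of "at least one occurrence" with the
# linear approximation n*p for several (p, n) pairs.
def exact(p, n):
    # Probability the outcome occurs at least once in n independent trials.
    return 1 - (1 - p) ** n

def approx(p, n):
    # First-order approximation, valid when n*p is small.
    return n * p

for p, n in [(0.001, 10), (0.01, 10), (0.01, 100)]:
    print(f"p={p}, n={n}: exact={exact(p, n):.6f}, approx={approx(p, n):.6f}")
```

Note that once $np$ is no longer small (the last pair), the approximation visibly overshoots, since the exact probability can never exceed 1 while $np$ grows without bound.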
This approximation will be most accurate when $p$ is small. For a precise analysis of the error, we can use the Lagrange remainder: since $np$ is the first-degree Taylor approximation to $f(p) = 1 - (1-p)^n$ about $p = 0$, the remainder term is $f''(t) \frac{p^2}{2!}$ for some $0 < t < p$. We compute $|f''(t)| = |-n(n-1)(1-t)^{n-2}| < n(n-1)$, so the error is bounded above by $$ \frac{n(n-1)}{2!} p^2. $$
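As a sanity check, the bound above can be verified numerically. This sketch (with an illustrative choice of $n = 20$ and a sweep of small $p$, not values from the text) confirms that the actual error never exceeds the Lagrange bound:

```python
# Verify that |exact - n*p| <= n*(n-1)/2 * p^2 over a sweep of small p.
def exact(p, n):
    # Probability the outcome occurs at least once in n independent trials.
    return 1 - (1 - p) ** n

n = 20  # illustrative number of trials
for k in range(1, 100):
    p = k / 10000  # p ranges over 0.0001 .. 0.0099
    err = abs(exact(p, n) - n * p)
    bound = n * (n - 1) / 2 * p ** 2
    assert err <= bound, (p, err, bound)
```

Since the bound scales like $p^2$ while the approximation itself scales like $p$, the relative error shrinks linearly in $p$, which is why the approximation works so well for rare events.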