While simulating Texas hold'em poker hands to find the probability that a given pair of hole cards gives the best hand after the river, I found a pattern that I can't explain.
I ran $100{,}000$ simulations of a pair of aces with $n$ players to estimate the mean probability $p$ of having the best hand after the river. To compare results across different numbers of players, I used the measure $m=p\cdot n$: since the baseline probability of having the best hand is $1/n$, $m=1$ means the hand is exactly average, and in general the hand is $m$ times as likely as average to be best.
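For concreteness, here is the measure illustrated with one row of the table below (the win probability $p$ is backed out from that row, not a separate simulation):

```python
# With n players, the baseline probability that any one player holds the
# best hand is 1/n, so m = p * n measures the lift over that baseline.
n = 6
p = 0.54          # simulated win probability for aces with 6 players (from the table)
baseline = 1 / n  # ~0.167
m = p * n
print(m)          # ~3.24, the table row for n = 6
```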
The result of the simulation was:
 n    m
 2    2.30
 3    2.70
 4    2.97
 5    3.14
 6    3.24
 7    3.30
 8    3.29
 9    3.26
10    3.20
11    3.13
12    3.04
There seems to be a maximum relative probability of having the best hand with a pair of aces at $7$–$8$ players. Why would $m$ be smaller with two players than at this maximum?
Let $q$ be the probability that a uniformly randomly drawn pair of hole cards loses against a pair of aces. If there are $n$ players, there are $n-1$ opponents, so under the assumption that they lose independently, the probability for the pair of aces to win is $q^{n-1}$, and thus $m=q^{n-1}n$. This is $0$ for $n=0$ and goes to $0$ for $n\to\infty$, so it must have a maximum in between. To find it, set the derivative with respect to $n$ to $0$: since $\frac{\mathrm d}{\mathrm dn}\,nq^{n-1}=q^{n-1}(1+n\log q)$, this yields $\log q+\frac1n=0$ and thus $n=-\frac1{\log q}$.
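A minimal sketch of this model, taking $q=0.875$ as a hypothetical value chosen to put the peak near $n\approx7.5$:

```python
import math

def m(n, q):
    """m = p * n under the independence model, where p = q**(n-1)
    is the probability that all n-1 opponents lose."""
    return n * q ** (n - 1)

def argmax_n(q):
    """Analytic maximiser of n * q**(n-1), from log q + 1/n = 0."""
    return -1.0 / math.log(q)

q = 0.875  # hypothetical value, chosen to match the observed peak
n_star = argmax_n(q)
print(n_star)  # about 7.49
print(m(7, q), m(8, q))  # nearly equal values either side of the peak
```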
In your case, the maximum is around $7.5$, so we have $q\approx e^{-\frac1{7.5}}\approx87.5\%$. You can see in this graph that this rather primitive model reproduces the data reasonably well. In a proper calculation, $q$ would depend on the community cards, and we would average over the community cards.
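Backing $q$ out of the observed peak location and evaluating the model at the table's player counts might look like this (the model reproduces the qualitative shape of the data, a rise to a peak near $n\approx7.5$ followed by a decline, though with a single effective $q$ it does not match the absolute values row for row):

```python
import math

# From n* = -1/log(q), the peak location n* ~ 7.5 gives q = exp(-1/n*).
n_peak = 7.5
q = math.exp(-1.0 / n_peak)
print(round(q, 4))  # 0.8752

# Model prediction m(n) = n * q**(n-1) for the same n as the table.
for n in range(2, 13):
    print(n, round(n * q ** (n - 1), 2))
```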