Combinatorics problems that can be solved more easily using probability


I'm looking for examples of combinatorics problems which would be very difficult to solve by direct enumeration, but can be solved easily using ideas from probability: independence, commuting sums and expectations, and so on. I know I have seen such problems before, especially in the HMMT combinatorics subject tests, but I can't recall any good examples right now.

I am NOT looking for probabilistic existence proofs (the so-called "probabilistic method" introduced by Erdős). The problems I'm interested in are enumerative.



Graph enumeration problems are often solved using the theory of random graphs. For example, if you want to asymptotically count the number of (labeled) connected graphs on $n$ vertices, it is easiest to estimate the probability that the Erdős–Rényi graph $\mathcal G_{n,1/2}$ is connected, then multiply by $2^{\binom n2}$, the total number of labeled graphs on $n$ vertices.
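To make the correspondence concrete for a tiny case, here is a Python sketch (function names are my own) that computes the exact probability that a uniformly random labeled graph on $n$ vertices is connected and multiplies it by $2^{\binom n2}$; for $n=4$ this recovers the 38 connected labeled graphs. (For small $n$ the "probability" is just the enumeration in disguise; the probabilistic viewpoint pays off asymptotically.)

```python
from itertools import combinations
from fractions import Fraction

def is_connected(n, edge_list):
    """Depth-first search from vertex 0; connected iff every vertex is reached."""
    adj = {v: [] for v in range(n)}
    for u, v in edge_list:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def count_connected_labeled_graphs(n):
    """Count connected labeled graphs on n vertices via
    P(G(n,1/2) is connected) * 2^C(n,2)."""
    edges = list(combinations(range(n), 2))
    total = 2 ** len(edges)  # number of labeled graphs on n vertices
    connected = sum(
        is_connected(n, [e for i, e in enumerate(edges) if mask >> i & 1])
        for mask in range(total)
    )
    p_connected = Fraction(connected, total)  # P(G(n,1/2) connected)
    return p_connected * total

print(count_connected_labeled_graphs(4))  # 38
```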

For a fancier example, the asymptotic number of labeled $d$-regular graphs on $n$ vertices can be found by a similar probabilistic strategy. If $d$ grows sufficiently slowly as $n \to \infty$, we analyze the configuration model (which generates a random $d$-regular multigraph by picking a uniformly random perfect matching on the $dn$ half-edges) and estimate the probability that the resulting graph is simple. For an example of the newest and fanciest results in this vein, see the paper of Liebenau and Wormald on asymptotic enumeration of graphs by degree sequence.
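A minimal Monte Carlo sketch of the configuration model (names are mine): pair up the $dn$ half-edges at random and check whether the resulting multigraph is simple. For fixed $d$, the probability of simplicity is known to tend to $e^{-(d^2-1)/4}$, which for $d=3$ is $e^{-2} \approx 0.135$; the simulation should land near that value.

```python
import random
from math import exp

def random_regular_multigraph(n, d, rng):
    """Configuration model: half-edge i belongs to vertex i // d.
    Shuffle the half-edges and pair them up consecutively."""
    half_edges = list(range(d * n))
    rng.shuffle(half_edges)
    return [(half_edges[i] // d, half_edges[i + 1] // d)
            for i in range(0, d * n, 2)]

def is_simple(edges):
    """Simple means no loops and no repeated edges."""
    seen = set()
    for u, v in edges:
        key = (min(u, v), max(u, v))
        if u == v or key in seen:
            return False
        seen.add(key)
    return True

rng = random.Random(0)
n, d, trials = 200, 3, 2000  # d * n must be even
p_simple = sum(is_simple(random_regular_multigraph(n, d, rng))
               for _ in range(trials)) / trials
print(p_simple, exp(-(d * d - 1) / 4))  # empirical vs. limiting value
```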

The fundamental ideas in such cases are often about proving asymptotic independence between many very unlikely events, which lets us make Poisson or Gaussian approximations to random variables that approach the truth as $n \to \infty$. This would be difficult to recast in a purely deterministic form, and the result would be very hard to understand.

By contrast, in a problem where the "trick" is just independence or commuting sums and expectations, the probability is essentially flavor: you could recast such a proof as computing a sum in two different ways, or something similar.


This might be a good example. Prove that for any positive integers $m,n$ with $m< n$, $$ \sum_{k=0}^n \frac{\binom{m}{k}}{\binom{n}k}=\frac{n+1}{n-m+1}. $$

Solution: Imagine you have a deck of $n$ cards, $m$ of which are black, the rest of which are red. You shuffle the deck, and deal from the top until you deal a red card. What is the expected number of cards dealt?

Letting $X$ be the number of cards dealt, you have $P(X>k)=\binom{m}{k}/\binom{n}{k}$, since $X>k$ occurs if and only if the first $k$ cards are all black. Therefore,

$$ E[X]=\sum_{k=0}^nP(X>k)=\sum_{k=0}^n \frac{\binom{m}{k}}{\binom{n}k}. $$ On the other hand, the $n-m$ red cards divide the deck into $n-m+1$ sections of black cards (some possibly empty). Each black card is equally likely to fall into any of these sections; in particular, it falls into the top section with probability $1/(n-m+1)$. By linearity of expectation, the expected number of black cards in the top section is $m/(n-m+1)$. Since $X$ is one more than the number of black cards above the first red card, the expected number of cards dealt is $$ 1+\frac{m}{n-m+1}=\frac{n+1}{n-m+1}. $$
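The identity can be sanity-checked with exact rational arithmetic; here is a short Python check (function names are mine):

```python
from fractions import Fraction
from math import comb

def lhs(m, n):
    # sum_{k=0}^{n} C(m,k)/C(n,k); terms with k > m vanish since C(m,k) = 0
    return sum(Fraction(comb(m, k), comb(n, k)) for k in range(n + 1))

def rhs(m, n):
    return Fraction(n + 1, n - m + 1)

# Verify for all positive integers m < n up to a small bound.
for n in range(2, 13):
    for m in range(1, n):
        assert lhs(m, n) == rhs(m, n)
print("identity verified for all 1 <= m < n <= 12")
```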


I think there are a lot of complicated combinatorial sums which can be made easy using linearity of expectation. For example, $$ \sum_{k=0}^nk\binom{n}kp^k(1-p)^{n-k}=np $$ is best understood by recognizing the left-hand side as the expected value of a binomial random variable with parameters $n$ and $p$, which is $np$ by linearity of expectation. Here the sum is not especially hard on its own, but there are trickier examples in the same vein.
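This one too can be verified exactly for small $n$ with rational arithmetic (a quick sketch; names are mine):

```python
from fractions import Fraction
from math import comb

def binomial_mean(n, p):
    """Evaluate sum_{k=0}^{n} k * C(n,k) * p^k * (1-p)^(n-k) exactly."""
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

p = Fraction(2, 7)
for n in range(10):
    assert binomial_mean(n, p) == n * p  # E[Binomial(n, p)] = np
print("checked E[Binomial(n, 2/7)] = 2n/7 for n < 10")
```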
