Examples where we calculate combinations/permutations using probability


The probability of an event $A$, when all $n$ outcomes of an experiment are equally likely, is given by: $$P(A) = \frac{n(A)}{n}$$ Usually $n$ is easy to find, and the only hurdle in computing $P(A)$ is finding $n(A)$.

Suppose, however, that we are interested in finding $n(A)$ itself. If we know $n$ and can find $P(A)$ by some other method (not the formula above), then we can recover $$n(A) = nP(A)$$

I'm looking for examples of such counting problems, which can be solved by probability.


2 Answers

BEST ANSWER

There is a notion related to your idea: probabilistic proofs of combinatorial identities. Suppose you have mutually exclusive and exhaustive events $E_1,\dots,E_m$, which means that $$ P(E_1)+\dots+P(E_m)=1\tag1 $$ Multiplying both sides by $n$, you get $$ n(E_1)+\dots+n(E_m)=n,\tag2 $$ an identity in which all the numbers involved are integers. For integer identities whose terms have combinatorial meaning, it is often desirable to give a bijective proof: you find two sets described by the two sides of the equation and exhibit a bijection between them.

I will now give an example where $(1)$ is easy to prove, but $(2)$ is hard to prove bijectively. In my mind, probabilistic proofs are the next best thing to bijective proofs.

Imagine an urn that initially contains a single red marble and a single blue marble. You make a series of $n$ draws from this urn, where after each draw you put back the marble you drew, plus two additional marbles of the same color. Let $E_k$ be the event that you drew exactly $k$ red marbles over the course of these $n$ draws. In my answer, I show that $$ P(E_k)=\frac{\binom{2k}k\binom{2(n-k)}{n-k}}{4^n},\qquad k\in \{0,1,\dots,n\} $$ which leads to the identity $$ \sum_{k=0}^n \binom{2k}k\binom{2(n-k)}{n-k}=4^n\tag3 $$ I cannot really claim that $(3)$ is hard to prove. You can prove $(3)$ automatically with a computer, using the methods of the book *A=B* by Petkovsek, Wilf and Zeilberger, and there is also a fairly straightforward generating-function proof. But these proofs lack the intuitiveness of a bijective proof, and proving $(3)$ bijectively is famously hard.
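As a sanity check (my addition, not part of the original answer), here is a short Python sketch that computes the exact distribution of the number of red draws for this urn by dynamic programming and compares it with the stated formula; summing the distribution then confirms identity $(3)$:

```python
from fractions import Fraction
from math import comb

def red_draw_dist(n):
    """Exact distribution of the number of red draws in n draws from the urn
    (start: 1 red, 1 blue; each drawn marble is returned with 2 more of its color)."""
    dist = {0: Fraction(1)}          # after 0 draws, 0 red draws with probability 1
    for t in range(n):
        total = 2 + 2 * t            # marbles in the urn before draw t+1
        new = {}
        for r, p in dist.items():
            reds = 1 + 2 * r         # red marbles present, given r red draws so far
            new[r + 1] = new.get(r + 1, Fraction(0)) + p * Fraction(reds, total)
            new[r] = new.get(r, Fraction(0)) + p * Fraction(total - reds, total)
        dist = new
    return dist

n = 8
dist = red_draw_dist(n)
# Agrees with P(E_k) = C(2k,k) C(2(n-k),n-k) / 4^n for every k ...
for k in range(n + 1):
    assert dist[k] == Fraction(comb(2 * k, k) * comb(2 * (n - k), n - k), 4 ** n)
# ... and summing over k gives identity (3): the probabilities add up to 1.
assert sum(dist.values()) == 1
```

The exact `Fraction` arithmetic makes the comparison a true identity check rather than a floating-point approximation.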


The method you cite requires all elements of the outcome set to carry identical probability mass (i.e., a uniform, unbiased weighting) and, of course, a finite count.

If you do have that, then you can indeed use basic algebra to get $n(A)=n\,\mathsf P(A)$.
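As a concrete illustration of that algebra (my example, not from the answer above): the number of 5-card flush hands can be obtained by computing $\mathsf P(\text{flush})$ draw by draw and then multiplying by $n=\binom{52}{5}$:

```python
from fractions import Fraction
from math import comb

# P(all 5 cards share a suit), computed sequentially:
# the first card can be anything; each later card must match its suit.
p_flush = Fraction(12, 51) * Fraction(11, 50) * Fraction(10, 49) * Fraction(9, 48)

n = comb(52, 5)       # total 5-card hands, all equally likely
count = n * p_flush   # n(A) = n * P(A)

assert count == 4 * comb(13, 5)   # 5148 flushes (straight flushes included)
```

Here the probability was found without enumerating hands, and multiplying by $n$ recovers the exact integer count.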


I cannot think of a case where finding the count is much harder than finding the probability. In such a setting, if you can find one, you can find the other in much the same way; they differ only by a constant factor, after all.