Probability addition rule over 100 percent?


In Texas hold'em, one is dealt a Decent Hand (any pocket pair or any two Broadway cards) ~15 percent of the time. If there are three people left in the hand, I can use the probability addition rule to say at least one of those three people left will show up with a Decent Hand ~45 percent of the time, correct? What about when there are 10 people left? Does someone show up with a Decent Hand ~150% of the time?

Edit: Why is the addition rule not suited (hehehe) to this case?

3 Answers

BEST ANSWER

The addition rule of probability states that for two events, the measure of their union is equal to the sum of the measures of the events minus the measure of their intersection: $$\mathsf P(A\cup B) = \mathsf P(A)+\mathsf P(B)-\mathsf P(A\cap B)$$

When these events are mutually exclusive, then $\mathsf P(A\cap B)=0$, so : $$\mathsf P(A\cup B) = \mathsf P(A)+\mathsf P(B)$$

But only when this is so.   The events that each of several people receives a decent hand from the same deal of a deck are not mutually exclusive.   More than one such person may be dealt a decent hand at the same time.
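The rule above can be checked numerically. A minimal Python sketch using two overlapping coin-flip events (a toy example for illustration, nothing poker-specific):

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair coin flips, all outcomes equally likely.
space = list(product("HT", repeat=2))
A = {s for s in space if s[0] == "H"}   # first flip is heads
B = {s for s in space if s[1] == "H"}   # second flip is heads (overlaps A)

P = lambda E: Fraction(len(E), len(space))

print(P(A) + P(B))             # 1   -- naive sum counts the overlap twice
print(P(A) + P(B) - P(A & B))  # 3/4 -- the full addition rule
print(P(A | B))                # 3/4 -- the true probability of the union
```

Here $\mathsf P(A)+\mathsf P(B)=1$ even though the union clearly has probability $3/4$; subtracting the intersection repairs the double count.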

ANSWER

No, you cannot. The quality of a player's hand influences the likelihood of their staying in the hand, and even if we ignore this, "the probability addition rule" means something very different. If you deal hands to three people, the chance that at least one of them has a "decent hand" (ignoring that the hands are not independent) is $1 - 0.85^3$.
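That complement calculation, sketched in Python with the question's rounded 15% figure (and pretending, as this answer does, that the hands are independent):

```python
p = 0.15   # the question's rounded chance that one player holds a Decent Hand

# P(at least one of n players has it) = 1 - P(none of them do),
# pretending the n hands are independent draws.
for n in (3, 10):
    print(n, 1 - (1 - p) ** n)
```

For 3 players this gives about 0.386, and for 10 players about 0.803; neither can ever exceed 1, unlike naive addition.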

edit: Previously I wrote incorrectly 0.15 instead of 0.85

ANSWER

First of all, the probability of being dealt a pair or two Broadway cards (T-A) is about 17.95%. There are 20 Broadway cards, so ${20 \choose 2} = 190$ ways to choose two of them, plus $8{4 \choose 2} = 8 \cdot 6 = 48$ pairs 2-9, for a total of 238 hands out of ${52 \choose 2} = \frac{52 \cdot 51}{2} = 1326$ possible hands.
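For anyone who wants to verify the count, a quick Python check of the combinatorics (Python rather than the R used for the simulation below, purely for convenience):

```python
from math import comb

broadway = comb(20, 2)       # two of the 20 T-A cards (includes the pairs TT-AA)
low_pairs = 8 * comb(4, 2)   # pairs 2-9: 8 ranks, C(4,2) = 6 suit combos each
total = comb(52, 2)          # all possible 2-card starting hands

print(broadway + low_pairs, total)      # 238 1326
print((broadway + low_pairs) / total)   # ~0.1795
```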

As to your main question, adding probabilities that way is only exact for mutually exclusive events, meaning events of which at most one can occur at a time. For example, players being dealt the ace of spades are mutually exclusive events. Only one player can have the ace of spades, so the probability is $2/52 = 1/26$ for a particular player (who holds 2 of the 52 cards), and $20/52 = 10/26$ for 10 players.

When you hold a pair of aces, only a single player can have a pair of aces with you, so you can get the probability that one of your 9 opponents has aces by simply multiplying the probability that a particular player has it, which is ${2 \choose 2}/{50 \choose 2} = 1/1225$, by 9 to get 9/1225.

A less obvious example is someone flopping a set of aces when there is an ace on the flop and you don't have one. Even though there are 3 aces remaining among the 47 cards you can't see, only 1 player can have a pair of them (aces, that is), so to get the probability that one of your 9 opponents has it, you are entitled to multiply the probability for 1 player, ${3 \choose 2}/{47 \choose 2} = 3/1081$, by 9 to get 27/1081. That's not realistic, though, because it assumes everyone is seeing the flop with random cards; the actual probability will be much higher. However, if everyone's range contains the same number of hands, you can still use this multiply-by-9 method on whatever the probability is per player.
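Those multiply-by-9 calculations can be checked in Python; note the unseen-card counts used here are 50 (your 2 hole cards removed) for the preflop case and ${47 \choose 2} = 1081$ two-card combos (your hand plus the 3-card flop removed) for the flop case:

```python
from fractions import Fraction
from math import comb

# You hold AA, so 50 unseen cards; a given opponent needs both remaining aces.
one_opp = Fraction(comb(2, 2), comb(50, 2))
print(one_opp, 9 * one_opp)           # 1/1225 9/1225

# Ace on the flop, none in your hand: 47 unseen cards, 3 aces among them;
# a given opponent needs 2 of those 3 aces to flop a set.
one_opp_set = Fraction(comb(3, 2), comb(47, 2))
print(one_opp_set, 9 * one_opp_set)   # 3/1081 27/1081
```

Multiplying by 9 is exact in both cases only because at most one opponent can hold the hand in question.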

When more than one player can have the hand in question, this method will not be correct, though when the probability is small, it can still be a good approximation. Other times, as in your case, it can be terrible. The problem is that you are double-counting the cases where 2 people have the hand, triple-counting the cases where 3 have it, quadruple-counting the cases where 4 have it, etc. In those cases you can use what is called the inclusion-exclusion principle to correct all this over-counting and make the answer exact, or as accurate as you care to make it. Sometimes this isn't difficult, and other times it can be messy; in your example, it would be messy. However, very often in hold'em you can avoid that method and get a decent approximation by assuming that the hands in question are independent. Don't confuse independence with mutual exclusivity (many people seem to do this). Mutually exclusive events cannot be independent (except in the trivial case where one of them has probability 0).
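Here is a miniature, made-up example of that double-counting, small enough to enumerate exhaustively in Python (a 6-card deck with 2 aces, one card to each of two players; not the hold'em case):

```python
from fractions import Fraction
from itertools import permutations

# Toy deck: 6 cards, 2 of them aces; deal one card to each of two players.
deck = ["A", "A", "x", "x", "x", "x"]
deals = list(permutations(range(6), 2))   # 30 equally likely ordered deals

P = lambda event: Fraction(sum(event(a, b) for a, b in deals), len(deals))
p1 = P(lambda a, b: deck[a] == "A")                       # player 1 has an ace
p2 = P(lambda a, b: deck[b] == "A")                       # player 2 has an ace
both = P(lambda a, b: deck[a] == "A" and deck[b] == "A")  # double-counted cases
union = P(lambda a, b: deck[a] == "A" or deck[b] == "A")  # at least one ace

print(p1 + p2)          # 2/3 -- naive addition, too big
print(p1 + p2 - both)   # 3/5 -- inclusion-exclusion
print(union)            # 3/5 -- exact, by direct enumeration
```

The naive sum overshoots by exactly the probability that both players have an ace, which is the term inclusion-exclusion subtracts back out.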

If events are independent, it means that the occurrence of one event does not affect the probability that the other occurs. Note that this certainly isn't the case for mutually exclusive events, where one event makes the probability of the other zero. You don't have independent events in your example either, because of card-removal effects. When one player gets dealt one of these hands, it changes the probability that another player will also be dealt one of them. When someone gets dealt Broadway cards, there are fewer Broadway cards for someone else to be dealt, but there are also fewer total cards left. Overall, the probability that the next player will also get Broadway cards or a pair drops from 17.95% to $[{18 \choose 2} + 8{4 \choose 2}]/{50 \choose 2}$, or about 16.4%. When someone is dealt a pair 2-9, the next person actually has a higher chance of being dealt a decent hand. It increases to $[{20 \choose 2} + 7{4 \choose 2}+1]/{50 \choose 2}$, or about 19.0%. In this case, the fact that there are now only 50 cards made a bigger difference than the fact that there are 5 fewer pairs (removing a pair removes 5 of the 6 pairs of that rank). With the Broadway cards, it was the opposite: the reduction in the number of Broadway cards was more important, and the probability decreased.
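The card-removal numbers can be reproduced in Python; the only assumption is which cards have been removed from the 52 in each scenario:

```python
from fractions import Fraction
from math import comb

# Baseline: pair or two Broadway cards from a full 52-card deck.
decent = Fraction(comb(20, 2) + 8 * comb(4, 2), comb(52, 2))
print(float(decent))          # ~0.1795

# After one player takes two Broadway cards: 18 Broadways, 50 cards remain.
after_broadway = Fraction(comb(18, 2) + 8 * comb(4, 2), comb(50, 2))
print(float(after_broadway))  # ~0.164

# After one player takes a pair 2-9: 7 low ranks still have 6 pairs each,
# the depleted rank has 1 pair left, and 50 cards remain.
after_low_pair = Fraction(comb(20, 2) + 7 * comb(4, 2) + 1, comb(50, 2))
print(float(after_low_pair))  # ~0.190
```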

In your case, even though the hands are not independent, pretending that they are will at least give a much better approximation than pretending you have mutually exclusive events. If you use the correct value of 17.95% for 1 player, simply multiplying by 3 as you did would give about 53.8% for 3 players, and (lol) 179.5% for 10 players. Instead, to pretend that 3 players are independent, you would compute $$1 - (1 - 0.1795)^{3} \approx 44.8\%$$ and for 10 players $$1 - (1 - 0.1795)^{10} \approx 86.2\%.$$ That is, you raise the probability that a single player does not have one of these hands, $1-0.1795$, to the power of the number of players to approximate the probability that none of the players has one; this would be exact if the hands were independent, which they are not. Then subtract that from 1 for the probability that at least 1 player has one of these hands. The exact answers are actually about 45.2% and 88.0%, so quite good for 3 players, and off a bit for 10, but still in the ballpark. It works better when the probability for 1 player is smaller. IME, it is often very effective for hold'em probabilities even for 10 players, and it's only off as much as it is in your case because you are considering such a large number of hands. Now you reported 45% for the first one too, but remember that was bogus because it was based on 15% for 1 player.
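The two approximations side by side in Python, using the exact single-player probability of 238/1326:

```python
p = 238 / 1326   # exact probability of a "decent hand" for one player, ~17.95%

for n in (3, 10):
    naive = n * p                # treats the events as mutually exclusive
    indep = 1 - (1 - p) ** n     # treats the hands as independent
    print(n, round(naive, 3), round(indep, 3))
# n=3:  naive 0.538, independence 0.448
# n=10: naive 1.795 (nonsense), independence 0.862
```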

BTW, how do I know the exact answers are about 45.2% and 88.0%? I wrote a quick simulation in R which isn't hard to do as shown below. You can change the number of players on the first line, and it outputs the 99.9% confidence interval, so you can be 99.9% confident that the answer falls between those 2 error values. The simulation runs until you break it. The longer you run it, the tighter the confidence interval, but in this case 30 seconds should be plenty.

players = 10   # Input number of players

deck = rep(2:14,4)   # 13 ranks (2..14, ace high), 4 suits; suits are irrelevant here
sims = 0
count = 0
while(1) {
  hands = sample(deck,2*players,replace=FALSE)   # deal 2 cards to each player
  first.cards = hands[1:players]
  second.cards = hands[(players+1):(2*players)]
  if ( any(first.cards == second.cards) ||                # any pocket pair
       any(first.cards >= 10 & second.cards >= 10)        # any two Broadway cards
     ) count = count + 1
  sims = sims + 1
}
p = count/sims
error = 3.29*sqrt(p*(1-p)/sims)   # half-width of the 99.9% CI (z = 3.29)
p
p-error
p+error
sims

Output for 10 players:

> p
[1] 0.8798065
> p-error
[1] 0.8785264
> p+error
[1] 0.8810866
> sims
[1] 698532