Free throw interview question


I recently had an interview question that posed the following. Suppose you are shooting free throws and each shot has a 60% chance of going in (there are no "learning" or "depreciation" effects; every shot has the same probability no matter how many shots you take).

Now there are three scenarios where you can win $1000

  1. Make at least 2 out of 3
  2. Make at least 4 out of 6
  3. Make at least 20 out of 30

My initial thought was that all three are equally appealing, since each requires making the same percentage of free throws. However, when using a binomial calculator (which this process seems to follow), P(X ≥ x) appears to be highest for scenario 1. Is this due to the number of combinations?

There are 4 answers below.

Best answer (8 votes):
The result is linked to the law of large numbers, which states, roughly, that the more trials you run, the closer the observed proportion of successes gets to the true probability. So after 10,000 trials I would expect the number of makes to be proportionally closer to 6,000 than 60 makes would be after 100 trials.

The point here is that in all these cases the required proportion is $\frac23$ - i.e. greater than the success probability of 60%. Since for larger numbers of shots the observed proportion will be closer to 60%, we are less likely to be at or above $\frac23$.

If you were to change the parameters slightly - and ask for success in at least 55% of the shots, for example - then you would see the complete opposite: the probability of winning increases with the number of shots.
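The claim above is easy to check numerically. Here is a minimal sketch using exact binomial tail sums (the function name `tail_prob` and the sample sizes for the 55% case are my own choices, not from the question):

```python
from math import ceil, comb

def tail_prob(n, k, p=0.6):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Needing at least 2/3 of the shots: the probability falls as n grows.
for shots, need in [(3, 2), (6, 4), (30, 20)]:
    print(shots, tail_prob(shots, need))

# Needing only 55% (below the true 60%): the trend reverses.
for shots in (20, 100, 1000):
    print(shots, tail_prob(shots, ceil(0.55 * shots)))
```

The first loop reproduces the three scenarios from the question; the second shows the "complete opposite" behavior once the required proportion drops below 60%.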

Answer (0 votes):

Precisely speaking, each free throw is a Bernoulli trial with probability of success $p = 0.6$. Hence:

  • the expected value of a single throw is $p$ and the variance is $p(1-p) = 0.24$;
  • the sample proportion of successes out of $n$ trials has expected value $p$, but its variance is $p(1-p)/n$.

This means that although the expected proportion of successes stays constant, the variance decreases as a function of $n$.

This intuitively tells us that the probability of observing a sample proportion at least as large as $\frac23$ is a decreasing function of the sample size $n$: as the variance of the sampling distribution gets smaller, more probability mass is concentrating around the mean. Since the mean is less than $\frac23$, it becomes less likely to observe outcomes in excess of $\frac23$.

Conversely, it becomes more likely to observe outcomes in excess of a value that is smaller than the mean.
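As a quick numerical check of the shrinking variance (a sketch; the variable names are mine):

```python
# Variance of the sample proportion p_hat = X/n, for p = 0.6:
p = 0.6
for n in (3, 6, 30):
    var = p * (1 - p) / n        # Var(p_hat) = p(1-p)/n
    sd = var ** 0.5
    print(f"n={n:2d}  Var(p_hat)={var:.4f}  SD(p_hat)={sd:.4f}")
```

For $n = 30$ the standard deviation of the sample proportion is already under 0.09, so little probability mass remains beyond $\frac23$ when the mean is 0.6.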

Answer (7 votes):

$P(\#1) = \binom{3}{2}\cdot{(\frac{60}{100})}^2\cdot{(\frac{40}{100})}^1+\binom{3}{3}\cdot{(\frac{60}{100})}^3\cdot{(\frac{40}{100})}^0 = 0.648$


$P(\#2) = \binom{6}{4}\cdot{(\frac{60}{100})}^4\cdot{(\frac{40}{100})}^2+\binom{6}{5}\cdot{(\frac{60}{100})}^5\cdot{(\frac{40}{100})}^1+\binom{6}{6}\cdot{(\frac{60}{100})}^6\cdot{(\frac{40}{100})}^0 = 0.54432$


$P(\#3) = \sum\limits_{n=20}^{30}\binom{30}{n}\cdot{(\frac{60}{100})}^n\cdot{(\frac{40}{100})}^{30-n} = $ feel free to do the math yourself...
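If you would rather not do that sum by hand, here is a sketch that evaluates it exactly with Python's `math.comb`:

```python
from math import comb

p, q = 0.6, 0.4
# P(#3) = sum over n = 20..30 of C(30, n) * 0.6^n * 0.4^(30-n)
p3 = sum(comb(30, n) * p**n * q**(30 - n) for n in range(20, 31))
print(p3)  # roughly 0.29, well below P(#1) = 0.648 and P(#2) = 0.54432
```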


In general, you need to prove that the following sequence is monotonically decreasing:

$A_k = \sum\limits_{n=2k}^{3k}\binom{3k}{n}\cdot{(\frac{60}{100})}^n\cdot{(\frac{40}{100})}^{3k-n}$

I think it should be pretty easy to do so by induction...
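Short of a proof, the claim is at least easy to verify numerically for the first several terms (a sketch; `A` implements the sequence $A_k$ defined above, and checking $k = 1, \dots, 10$ covers all three scenarios from the question):

```python
from math import comb

def A(k, p=0.6):
    """P(at least 2k successes in 3k Bernoulli(p) trials)."""
    return sum(comb(3*k, n) * p**n * (1 - p)**(3*k - n)
               for n in range(2*k, 3*k + 1))

vals = [A(k) for k in range(1, 11)]
print(vals)
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing for k = 1..10
```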

Answer (0 votes):

Because your expected number of successes is less than the number required to win, you need to get a bit lucky in order to succeed. That is much harder for large samples: if you had to make 2/3 of a thousand throws, you would be in very bad shape; whereas making 1 out of 1 on a single throw is a 60% chance.

So the intuitive explanation is: you lose in the long run, so you need variance to win. The smaller the sample size, the bigger the impact of variance.
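A quick simulation makes the "you need variance to win" intuition concrete (a sketch; the trial counts and seed are arbitrary choices of mine):

```python
import random

random.seed(0)

def win_rate(shots, needed, trials=100_000, p=0.6):
    """Estimate P(at least `needed` makes in `shots` attempts) by simulation."""
    wins = 0
    for _ in range(trials):
        makes = sum(random.random() < p for _ in range(shots))
        wins += makes >= needed
    return wins / trials

print(win_rate(1, 1))                    # about 0.60
print(win_rate(3, 2))                    # about 0.65
print(win_rate(30, 20))                  # about 0.29
print(win_rate(1000, 667, trials=2000))  # 2/3 of a thousand: essentially zero
```

The long-run case almost never wins because the sample proportion hugs 0.6 tightly, while the 3-shot case wins often precisely because its outcome is so noisy.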