Is $\lim_{p\rightarrow 0} \frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]} = \infty$?


I am interested in

$$\lim_{p\rightarrow 0} \frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}.$$

Intuitively, it seems that the limit would be $\infty$, with $P[Bi(n,p) = 1]$ converging to zero much more slowly than $P[Bi(n,p) \geq 2]$. This is consistent with graphing the ratio: https://www.desmos.com/calculator/enxdmyyry5.

The limit can, of course, be re-written as

$$\lim_{p\rightarrow 0} \frac{n p (1-p)^{n-1}}{1 - n p (1-p)^{n-1} - (1-p)^{n} }.$$

I naturally tried L'Hôpital's rule but could not get much traction by playing with the ratio of derivatives, although graphing that ratio again suggests that the limit is $\infty$: https://www.desmos.com/calculator/ee4m0juiy2.
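As a quick numerical sanity check (a Python sketch; the ratio is computed directly from the closed form above, with $n$ fixed arbitrarily), one can tabulate the ratio for shrinking $p$:

```python
def ratio(n, p):
    """P[Bi(n,p) = 1] / P[Bi(n,p) >= 2] via the closed form."""
    p_eq_1 = n * p * (1 - p) ** (n - 1)   # P[Bi(n,p) = 1]
    p_ge_2 = 1 - p_eq_1 - (1 - p) ** n    # P[Bi(n,p) >= 2]
    return p_eq_1 / p_ge_2

n = 10  # any fixed n >= 2
for k in range(1, 7):
    p = 10.0 ** -k
    print(f"p = 1e-{k}: ratio = {ratio(n, p):.6g}")
```

Each tenfold decrease in $p$ appears to multiply the ratio by roughly ten, i.e., the ratio seems to grow like $1/p$, consistent with the graphs.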

Another thing I tried was finding a function that bounds $\frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}$ from below and that I can show tends to $\infty$. I somewhat naturally tried

$$\frac{n p (1-p)^{n-1}}{1 - (1-p)^{n} }$$

and

$$\frac{n p (1-p)^{n-1}}{1 - n p (1-p)^{n-1}}.$$

Unfortunately, although both expressions trivially bound $\frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}$ from below, neither tends to $\infty$: https://www.desmos.com/calculator/ee4m0juiy2.
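Numerically (a quick Python check of the two candidate bounds, with $n$ fixed arbitrarily), the first expression appears to tend to $1$ and the second to $0$:

```python
n = 10  # any fixed n >= 2
for k in range(1, 7):
    p = 10.0 ** -k
    # candidate bound 1: denominator P[Bi >= 1] = 1 - (1-p)^n >= P[Bi >= 2]
    b1 = n * p * (1 - p) ** (n - 1) / (1 - (1 - p) ** n)
    # candidate bound 2: denominator 1 - P[Bi = 1] >= P[Bi >= 2]
    b2 = n * p * (1 - p) ** (n - 1) / (1 - n * p * (1 - p) ** (n - 1))
    print(f"p = 1e-{k}: bound1 = {b1:.6g}, bound2 = {b2:.6g}")
```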

I would appreciate any help or hints on how to prove or disprove that the limit is indeed $\infty$.


There are 2 best solutions below


Ok, I now seem to be getting somewhere by combining the two approaches I tried, i.e., applying L'Hôpital's rule to a lower bound for $\frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}$ instead of to $\frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}$ itself.

Error checking still very much appreciated.

Step 1: A lower bound for $\frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}$

To find a lower bound for $\frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}$ that (a) still seems to tend to $\infty$ and (b) simplifies the denominator, I again played with desmos: https://www.desmos.com/calculator/lmwvbacjxd.

This led me to (using $(1-p)^{n} \leq (1-p)^{n-1}$, which shrinks the numerator and enlarges the denominator)

$$ \frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]} = \frac{n p (1-p)^{n-1}}{1 - n p (1-p)^{n-1} - (1-p)^{n} } \geq \frac{n p (1-p)^{n}}{1 - n p (1-p)^{n} - (1-p)^{n} }.$$
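Before relying on this inequality, a quick Python check over a grid of $(n, p)$ values (an arbitrary sample, not a proof) confirms it holds:

```python
# Check the claimed lower bound on a sample grid of (n, p) values.
for n in [2, 5, 10, 50]:
    for p in [1e-4, 1e-3, 0.01, 0.1, 0.3, 0.5]:
        # original ratio P[Bi = 1] / P[Bi >= 2]
        lhs = n * p * (1 - p) ** (n - 1) / (1 - n * p * (1 - p) ** (n - 1) - (1 - p) ** n)
        # proposed lower bound with (1-p)^n in place of (1-p)^(n-1)
        rhs = n * p * (1 - p) ** n / (1 - n * p * (1 - p) ** n - (1 - p) ** n)
        assert lhs >= rhs, (n, p)
print("inequality holds on all sampled (n, p)")
```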

Factoring the denominator of the last expression yields

$$\frac{n p (1-p)^{n}}{ 1 - (1-p)^{n} \, ( np + 1) },$$

whose denominator turns out to have a more useful derivative than the original denominator $1 - n p (1-p)^{n-1} - (1-p)^{n}$ (given the goal of eventually canceling terms between the differentiated numerator and denominator).

Step 2: Using L'Hôpital's rule to show that $\lim_{p\rightarrow 0} \frac{n p (1-p)^{n}}{ 1 - (1-p)^{n} ( np + 1)} = \infty$

Both the numerator and the denominator of this lower bound tend to $0$ as $p \rightarrow 0$, so L'Hôpital's rule applies. We have

$$ \frac{(\partial/\partial p) ~ n p (1-p)^{n}}{(\partial/\partial p) ~ \left\{1 - (1-p)^{n} ( np + 1) \right\}}$$

$$ = \frac{n(1-p)^{n} - n^2 p \, (1-p)^{n-1}}{- \left[ - n(1-p)^{n-1} (np + 1) + n (1-p)^{n} \right]}$$

$$ = \frac{n(1-p)^{n} - n^2 p \, (1-p)^{n-1}}{ n(1-p)^{n-1} \, (n+1) p}$$

$$ = \frac{1-p}{(n+1)p} - \frac{n}{n+1} ~~ \xrightarrow{p \rightarrow 0} ~~ \infty.$$

Since the ratio of derivatives tends to $\infty$, so does the lower bound, and hence so does $\frac{P[Bi(n,p) = 1]}{P[Bi(n,p) \geq 2]}$.
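To guard against differentiation mistakes (which would invalidate the conclusion), here is a small Python sketch comparing the closed-form derivatives against central finite differences at an arbitrarily chosen point:

```python
n, p, h = 10, 0.05, 1e-6  # arbitrary n and p; h is the finite-difference step

f = lambda p: n * p * (1 - p) ** n            # numerator of the lower bound
g = lambda p: 1 - (1 - p) ** n * (n * p + 1)  # denominator of the lower bound

# closed-form derivatives from the derivation above
f_prime = n * (1 - p) ** n - n ** 2 * p * (1 - p) ** (n - 1)
g_prime = n * (1 - p) ** (n - 1) * (n + 1) * p

# central finite-difference approximations
f_num = (f(p + h) - f(p - h)) / (2 * h)
g_num = (g(p + h) - g(p - h)) / (2 * h)

print(f_prime, f_num)  # should agree to many digits
print(g_prime, g_num)
# simplified ratio of derivatives vs. the direct ratio
print(f_prime / g_prime, (1 - p) / ((n + 1) * p) - n / (n + 1))
```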


Using Newton's binomial formula, you can get an estimate of the numerator and, most importantly, of the denominator, as $p$ tends to $0$.

The main idea is that $p^k$ is negligible compared to $p^l$ if $k > l$. I will use the notation $O(p^k)$ to denote: "some error term that is comparable to or smaller than $p^k$ when $p$ tends to $0$".

Here, we can use the exact formula: $(1-x)^k = 1 - kx + \frac{k(k-1)}{2}x^2-\frac{k(k-1)(k-2)}{6}x^3+\dots+(-1)^k x^k$. This formula can be obtained by expanding the product $(1-x)(1-x)\cdots(1-x)$ and counting the number of terms containing $x^i$ for $i=0,\dots,k$.

However, what we really use here is the estimate: $(1-x)^k = 1 - kx + \frac{k(k-1)}{2}x^2 + O(x^3)$ as $x$ tends to $0$.

So, the denominator is equal to $$1-np(1-p)^{n-1}-(1-p)^n $$

$$= 1- np\left[1-(n-1)p+O(p^2)\right] - \left[1-np + \frac{n(n-1)}{2}p^2 + O(p^3)\right]$$

$$ = \frac{n(n-1)}{2}p^2 + O(p^3)$$

Since the numerator is $np(1-p)^{n-1} = np + O(p^2)$, you should be able to check that $\frac{P(Bi(n,p)=1)}{P(Bi(n,p)\geq 2)} \times p \rightarrow \frac{2}{n-1}$ as $p$ tends to $0$; in particular, the ratio itself tends to $\infty$. (That is, unless my computations are wrong.)
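This asymptotic can be checked numerically (a Python sketch using the closed form of the ratio from the question, with $n$ fixed arbitrarily):

```python
n = 10  # any fixed n >= 2
target = 2 / (n - 1)
for k in range(2, 6):
    p = 10.0 ** -k
    # closed form of P[Bi(n,p) = 1] / P[Bi(n,p) >= 2]
    r = n * p * (1 - p) ** (n - 1) / (1 - n * p * (1 - p) ** (n - 1) - (1 - p) ** n)
    print(f"p = 1e-{k}: ratio * p = {r * p:.6f}  (target 2/(n-1) = {target:.6f})")
```

The product $r \cdot p$ settles onto $2/(n-1)$ as $p$ shrinks, matching the estimate above.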

If you like formal stuff, you can define $f(x) = O(g(x))$ as $x$ tends to $0$ by saying: there exists a positive constant $C$ such that, for all sufficiently small $x$, we have $|f(x)| \leq C|g(x)|$.

Hope this helps.

PS: L'Hôpital is overkill for this kind of problem ^^