$\lceil\ln(n)\rceil+\frac{(n-1)-\lceil\ln(n)\rceil}p\approx\frac{n}p$ when $n$ is much bigger than $p$. What do "much bigger" and $\approx$ mean here?


In a book I'm reading, the author claims the following:

$$\lceil\ln(n)\rceil+\frac{(n-1)-\lceil\ln(n)\rceil}{p} \approx \frac{n}{p}$$

when ''$n$ is much bigger than $p$'' (both positive integers). The exact meanings of ''much bigger'' and $\approx$ (which clearly indicates some sort of approximation) are not given; they are treated as natural, either because they are obvious or because the reader is assumed to be sophisticated enough to figure them out.

Before asserting the stated equivalence, the author surely had some tacit idea of what ''much bigger'' and $\approx$ mean. What I would like to know is a reasonable definition of both concepts, so that I can derive the result myself. I suspect that a possible definition of ''much bigger'' could be that $n/p$ tends to $\infty$, but I'm not sure how to compute with such a limit.
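As a quick sanity check of the claim itself, here is a small Python sketch (assuming the book's $\ln$ is the natural logarithm; the sample values of $n$ and $p$ are my own) comparing the left-hand side with $n/p$:

```python
import math

def lhs(n, p):
    """The book's left-hand side: ceil(ln n) + ((n-1) - ceil(ln n)) / p."""
    c = math.ceil(math.log(n))
    return c + ((n - 1) - c) / p

# Fractional error |LHS / (n/p) - 1| for a few sample values
for n, p in [(10**3, 2), (10**6, 2), (10**6, 100)]:
    print(n, p, abs(lhs(n, p) * p / n - 1))
```

In each case the fractional error is small, and it shrinks roughly like $p\ln(n)/n$.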

There are 2 answers below.

Best answer:

In your example, we can look at the error made when the left side is replaced by the right side: we lose the terms $\lceil \log n \rceil\left(1-\frac 1p\right)$ and $-\frac 1p$. If $n$ is rather large, $\log n$ is much smaller than $n$; for example, using base-$2$ logs, if $n \approx 1{,}000{,}000$ then $\log n \approx 20$. Similarly, the $-\frac 1p$ term is a factor of $n$ smaller than the $\frac np$ term. Often such an approximation comes from an expansion in powers of a small term. Here we are led to expect that the fractional error will be of order $\frac{p\log n}{n}$.

A more explicit example that comes up frequently is $(1+x)^n \approx 1+nx$ when $nx \ll 1$. We can expand it explicitly and get $$(1+x)^n=1+nx+\frac{n(n-1)}2x^2+\frac {n(n-1)(n-2)}{3!}x^3+\ldots$$ We see that if $nx \ll 1$, each successive term is roughly a factor of $nx$ smaller than the previous one. If we can accept that much error, we can often simplify the calculation significantly.
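The factor-of-$nx$ decay between successive terms is easy to check numerically; a minimal sketch, with hypothetical values $n=10$ and $x=10^{-3}$:

```python
# For nx << 1, the error of the linear approximation (1+x)^n ≈ 1+nx
# is dominated by the quadratic term n(n-1)/2 * x^2 of the expansion.
n, x = 10, 1e-3           # nx = 0.01, well below 1
exact = (1 + x)**n
linear = 1 + n*x
quad = n*(n - 1)/2 * x**2

print(exact - linear)     # error of the linear approximation
print(quad)               # nearly the same size: a factor ~nx below nx
```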

Second answer:

I would generally read these sorts of statements as a variant of the concept of a limit. In this case, if we take "much bigger" to mean additively larger, then it means

$\forall (\epsilon>0) \exists D: (n > D+p) \rightarrow \left|\frac{LHS}{n/p}-1\right|<\epsilon$

In other words, no matter how small your margin of error is, you can guarantee to land within that margin by making sure that the amount by which $n$ exceeds $p$ is above some minimum value $D$.

equivalently,

$\forall (\epsilon>0) \exists D: (n-p> D) \rightarrow \left|\frac{LHS}{n/p}-1\right|<\epsilon$

That is, as long as $n-p$ is above some minimum value $D$, your error is less than $\epsilon$.

If "much bigger" means multiplicatively larger, then

$\forall (\epsilon>0) \exists R: (n > Rp) \rightarrow \left|\frac{LHS}{n/p}-1\right|<\epsilon$

(If you're not familiar with the acronym, LHS means "Left Hand Side", i.e. the stuff to the left of the "approximately equal" symbol.)
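Whichever reading one takes, a short numeric sketch (with my own hypothetical values $p=2$ and $\epsilon=0.01$, and natural log for the book's $\ln$) illustrates the $\epsilon$–$D$ idea for a fixed $p$: past some threshold, the fractional error stays below $\epsilon$.

```python
import math

def rel_err(n, p):
    """Fractional error |LHS / (n/p) - 1| of the book's approximation."""
    c = math.ceil(math.log(n))
    lhs = c + ((n - 1) - c) / p
    return abs(lhs * p / n - 1)

p, eps = 2, 0.01
# For every n in a range safely past the threshold, the error is under eps
print(all(rel_err(n, p) < eps for n in range(10_000, 20_000)))  # prints True
```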