Trouble understanding how a big-o bound proves a limit


I’m trying to understand the answer to exercise 4.5.2-10(c) in Knuth’s The Art of Computer Programming. Let $q_n$ denote the number of ordered pairs of integers $(u,v)$ in the range $1\leq u,v \leq n$ such that $u$ and $v$ are relatively prime. We’re trying to prove that

$$ \lim_{n\rightarrow\infty} \frac{q_n}{n^2} = \sum_{k\geq 1} \frac{\mu (k)}{k^2} $$

where $\mu$ denotes the Möbius function. We already showed in part (b) that $q_n = \sum_{k\geq 1} \mu (k)\lfloor n/k \rfloor ^2$.
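To build intuition for the part-(b) identity, here is a small Python check (my own illustration, not from Knuth) that counts coprime pairs directly by `gcd` and compares with the Möbius sum; note the terms with $k > n$ vanish since $\lfloor n/k\rfloor = 0$ there:

```python
from math import gcd

def mobius_sieve(limit):
    """Compute mu(0..limit) with a simple sieve: flip the sign once per
    prime factor, then zero out multiples of squared primes."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

def q_direct(n):
    """Count ordered pairs (u, v), 1 <= u, v <= n, with gcd(u, v) = 1."""
    return sum(1 for u in range(1, n + 1)
                 for v in range(1, n + 1) if gcd(u, v) == 1)

def q_mobius(n):
    """Part (b): q_n = sum_{k>=1} mu(k) * floor(n/k)^2."""
    mu = mobius_sieve(n)
    return sum(mu[k] * (n // k) ** 2 for k in range(1, n + 1))

for n in (1, 2, 10, 50):
    assert q_direct(n) == q_mobius(n)  # the two counts agree
```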

I’ve already read the (rather terse) answer that Knuth provides, in which he asserts that

$$q_n - \sum_{k\geq 1} \mu (k) (n/k)^2 = O(n H_n) - O(n).$$

I have convinced myself on paper that this is true, but I’m having trouble seeing how it proves the limit. Since $nH_n$ increases as a function of $n$, it seems to me that the two sides should only get further apart as $n$ increases. Any insight would be much appreciated :)

Best answer:

$H_n = O(\log n) = o(n)$, so $O(nH_n) = o(n^2)$; likewise $O(n) = o(n^2)$. Also note that $\sum_{k\geq 1} \mu(k)(n/k)^2 = n^2 \sum_{k\geq 1} \mu(k)/k^2$. Dividing the displayed equality by $n^2$ therefore gives
$$\frac{q_n}{n^2} - \sum_{k\geq 1}\frac{\mu(k)}{k^2} = o(1),$$
which is exactly the claimed limit. The point is that the error bound $nH_n$ does grow, but strictly slower than $n^2$, so the *relative* error vanishes even though the absolute gap may not.
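As a numerical sanity check (my own addition, not part of Knuth’s solution): the limiting value $\sum_{k\geq 1}\mu(k)/k^2$ equals $1/\zeta(2) = 6/\pi^2 \approx 0.6079$, a standard fact not stated in the exercise. A minimal Python sketch showing $q_n/n^2$ approaching this value:

```python
from math import pi

def mobius_sieve(limit):
    """mu(0..limit) via a sieve; mu(k) = 0 if k has a squared prime factor."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

def q_mobius(n):
    """q_n = sum_{k=1}^{n} mu(k) * floor(n/k)^2 (terms with k > n vanish)."""
    mu = mobius_sieve(n)
    return sum(mu[k] * (n // k) ** 2 for k in range(1, n + 1))

target = 6 / pi ** 2  # sum_{k>=1} mu(k)/k^2 = 1/zeta(2)
for n in (100, 1000, 5000):
    print(n, q_mobius(n) / n ** 2)  # ratios drift toward ~0.6079
```

The gap at each $n$ is on the order of $H_n/n \approx (\log n)/n$, consistent with the $O(nH_n)$ error bound after dividing by $n^2$.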