Iwaniec & Kowalski: distribution of additive functions


I have some questions about section 1.7, "Distribution of additive functions", of Iwaniec and Kowalski's book Analytic Number Theory. Throughout, $f$ denotes an additive function.

  1. They use the estimate $$\sum_{p^\alpha\leq x}|f(p^\alpha)|\ll x^{\frac{1}{2}}D(x)$$ where $D(x)$ is defined by $$D^2(x)=\sum_{p^\alpha\leq x}|f(p^\alpha)|^2p^{-\alpha}\text{.}$$ They claim this follows from Cauchy's inequality. With a view to applying Cauchy's inequality, I would write $$\sum_{p^\alpha\leq x}|f(p^\alpha)|=\sum_{p^\alpha\leq x}p^{\frac{\alpha}{2}}\cdot\frac{|f(p^\alpha)|}{p^{\frac{\alpha}{2}}}\leq \left(\sum_{p^\alpha\leq x}p^\alpha\right)^{\frac{1}{2}}D(x)\text{.}$$ However, to get the desired estimate from here we would need $\sum_{p^\alpha\leq x}p^\alpha\ll x$, which is plainly false: by Chebyshev's bounds, $\sum_{p^\alpha\leq x}p^\alpha\asymp x^2/\log x$. So can someone explain where that estimate comes from? (A numerical sketch supporting my doubt is included after the questions.)

  2. Defining $$E(x)=\sum_{p^\alpha\leq x}f(p^\alpha)p^{-\alpha}(1-p^{-1})\text{,}$$ which is the desired approximation to the average of $f$, they use in the proof of Theorem 1.3 the fact that the error term $x^{\frac{1}{2}}|E(x)|D(x)$ is absorbed by the error term $xD^2(x)$. Why is that so?

  3. Finally, in the last sentence of the proof of Theorem 1.3, they claim: "Then replacing $x^{-1}\mathcal{M}_f(x)$ by $E(x)$ we make an admissible error". Again I don't see why that is so. It would seem to me that checking how $V(x)$ changes under this replacement requires not only good control of $x^{-1}\mathcal{M}_f(x)-E(x)$ (which we have, by (1.103)) but also good control of $\sum_{n\leq x}|f(n)-x^{-1}\mathcal{M}_f(x)|$ (which seems to be more of a problem).
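
For what it's worth, here is a quick numerical sketch of my doubt in question 1 (my own check, not from the book). I take the test case $f(p^\alpha)=1$ for every prime power, so that the left side counts prime powers up to $x$ and $D^2(x)=\sum_{p^\alpha\leq x}p^{-\alpha}$. The ratio against $x^{1/2}D(x)$ grows without bound, while the ratio against $xD(x)/(\log x)^{1/2}$ stays bounded:

```python
# A numerical sketch for question 1 (my own check, not from the book).
# Test case: f(p^alpha) = 1 for every prime power, so that
#   sum_{p^alpha <= x} |f(p^alpha)| = #{prime powers <= x},
#   D(x)^2 = sum_{p^alpha <= x} p^(-alpha).
from math import isqrt, log, sqrt

def primes_up_to(n):
    """Sieve of Eratosthenes: the list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

for x in (10**3, 10**4, 10**5, 10**6):
    S = 0      # sum of |f(p^alpha)| over prime powers p^alpha <= x
    D2 = 0.0   # D(x)^2
    for p in primes_up_to(x):
        q = p
        while q <= x:          # run over the powers p, p^2, ... <= x
            S += 1
            D2 += 1.0 / q
            q *= p
    D = sqrt(D2)
    print(f"x={x:>8}:  S/(x^(1/2) D) = {S / (sqrt(x) * D):6.2f}   "
          f"S/(x D / (log x)^(1/2)) = {S / (x * D / sqrt(log(x))):5.3f}")
```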

Any help?

Answer:
  1. The correct bound is $\sum_{p^\alpha \leq x} |f(p^\alpha)| \ll \frac{x}{(\log x)^{1/2}}D(x)$. Your application of Cauchy's inequality is fine; the missing ingredient is that the number of prime powers $p^\alpha \leq x$ is $\ll x/\log x$ by Chebyshev's bound, so $\sum_{p^\alpha \leq x} p^\alpha \ll x \cdot \frac{x}{\log x} = \frac{x^2}{\log x}$. Hence equation (1.103) should read $$\mathcal{M}_f(x) = xE(x) + O\left(\frac{x}{(\log x)^{1/2}}D(x)\right).$$ (This corrected bound is checked numerically in the first sketch after this list.) If $f(p^\alpha)$ is bounded, we can do better:

$$\mathcal{M}_f(x) = xE(x) + O\left(\frac{x}{\log x}\right).$$

  2. We need to show $E(x)^2 \ll xD(x)^2$, since then $x^{1/2}|E(x)|D(x) \ll x^{1/2}\cdot x^{1/2}D(x)\cdot D(x) = xD(x)^2$. Indeed, writing the summand as $\left(f(p^\alpha)p^{-\alpha/2}\right)\cdot\left(p^{-\alpha/2}(1-p^{-1})\right)$ and applying Cauchy-Schwarz, $$\begin{align} E(x)^2 &= \left(\sum_{p^\alpha \leq x} f(p^\alpha) p^{-\alpha} (1-p^{-1})\right)^2\\ &\leq \sum_{p^\alpha \leq x} |f(p^\alpha)|^2p^{-\alpha} \sum_{p^\alpha \leq x} p^{-\alpha}(1-p^{-1})^2\\ &\ll D(x)^2 \log \log x, \end{align}$$ where the last step uses $\sum_{p^\alpha \leq x} p^{-\alpha}(1-p^{-1})^2 \leq \sum_{p^\alpha \leq x} p^{-\alpha} \ll \log\log x$ (Mertens' estimate for $\sum_{p\leq x}p^{-1}$, plus $O(1)$ from the higher prime powers). Since $\log\log x \ll x$, this gives $E(x)^2 \ll xD(x)^2$, as needed. (The ratio $E(x)^2/(D(x)^2\log\log x)$ is also checked in the first sketch below.)

  3. Using $(a+b)^2 \leq 2a^2 + 2b^2$ termwise, we have $$\begin{align}\sum_{n \leq x} \left(f(n) - E(x)\right)^2 &= \sum_{n \leq x} \left(f(n) - x^{-1}\mathcal{M}_f(x) + x^{-1}\mathcal{M}_f(x) - E(x)\right)^2\\ &\leq 2 \sum_{n \leq x} (f(n) - x^{-1}\mathcal{M}_f(x))^2 + 2 \sum_{n \leq x} (x^{-1}\mathcal{M}_f(x) - E(x))^2\\ &= 2 V(x) + 2 \sum_{n \leq x} (x^{-1}\mathcal{M}_f(x) - E(x))^2. \end{align}$$ So it suffices to show $\sum_{n \leq x} (x^{-1}\mathcal{M}_f(x) - E(x))^2 \ll xD(x)^2$; note that this sidesteps the need to control $\sum_{n\leq x}|f(n)-x^{-1}\mathcal{M}_f(x)|$. Indeed, the summand does not depend on $n$, so by the corrected (1.103), $$\sum_{n \leq x} (x^{-1}\mathcal{M}_f(x) - E(x))^2 \leq \frac{1}{x}(\mathcal{M}_f(x) - xE(x))^2 \ll \frac{x}{\log x}D(x)^2.$$ (The second sketch below checks this numerically.)
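
To make points 1 and 2 concrete, here is a numerical sanity check (my own sketch, not from the book), again with the test case $f(p^\alpha)=1$, i.e. $f=\omega$. It uses the exact identity $\mathcal{M}_f(x)=\sum_{p^\alpha\leq x}f(p^\alpha)\left(\lfloor x/p^\alpha\rfloor-\lfloor x/p^{\alpha+1}\rfloor\right)$, valid for any additive $f$, and verifies that $|\mathcal{M}_f(x)-xE(x)|/(xD(x)/\sqrt{\log x})$ and $E(x)^2/(D(x)^2\log\log x)$ stay bounded:

```python
# Numerical check of points 1 and 2 (my own sketch), test case
# f(p^alpha) = 1, i.e. f = omega.  For additive f one has exactly
#   M_f(x) = sum_{p^alpha <= x} f(p^alpha) (floor(x/p^alpha) - floor(x/p^(alpha+1))),
# since floor(x/p^alpha) - floor(x/p^(alpha+1)) counts the n <= x with p^alpha || n.
from math import isqrt, log, sqrt

def primes_up_to(n):
    """Sieve of Eratosthenes: the list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

for x in (10**4, 10**5, 10**6):
    M = 0      # M_f(x), computed exactly
    E = 0.0    # E(x) = sum f(p^alpha) p^(-alpha) (1 - 1/p)
    D2 = 0.0   # D(x)^2
    for p in primes_up_to(x):
        q = p
        while q <= x:
            M += x // q - x // (q * p)
            E += (1 - 1 / p) / q
            D2 += 1.0 / q
            q *= p
    D = sqrt(D2)
    print(f"x={x:>8}:  |M_f - xE| / (x D / (log x)^(1/2)) = "
          f"{abs(M - x * E) / (x * D / sqrt(log(x))):5.3f}   "
          f"E^2 / (D^2 loglog x) = {E * E / (D2 * log(log(x))):5.3f}")
```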
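
And for point 3, a second sketch (same test case $f=\omega$, same sieve helper) computing both $V(x)=\sum_{n\leq x}(f(n)-x^{-1}\mathcal{M}_f(x))^2$ and $\sum_{n\leq x}(f(n)-E(x))^2$ directly; the point is that both are $\ll xD^2(x)$, so the replacement really does cost only an admissible error:

```python
# Numerical check of point 3 (my own sketch), test case f = omega.
# Computes V(x) = sum_{n<=x} (f(n) - x^{-1} M_f(x))^2 and the replaced sum
# W = sum_{n<=x} (f(n) - E(x))^2 directly, comparing both against x D(x)^2.
from math import isqrt, log

def primes_up_to(n):
    """Sieve of Eratosthenes: the list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

for x in (10**4, 10**5, 10**6):
    ps = primes_up_to(x)
    om = [0] * (x + 1)            # om[n] = omega(n) = f(n)
    for p in ps:
        for m in range(p, x + 1, p):
            om[m] += 1
    E = D2 = 0.0
    for p in ps:
        q = p
        while q <= x:
            E += (1 - 1 / p) / q
            D2 += 1.0 / q
            q *= p
    mean = sum(om[1:]) / x        # x^{-1} M_f(x)
    V = sum((v - mean) ** 2 for v in om[1:])
    W = sum((v - E) ** 2 for v in om[1:])
    print(f"x={x:>8}:  V/(x D^2) = {V / (x * D2):5.3f}   "
          f"W/(x D^2) = {W / (x * D2):5.3f}   "
          f"|W - V|/(x D^2) = {abs(W - V) / (x * D2):5.3f}")
```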