Some questions in the proof of Analytic Large Sieve


I am learning about the analytic large sieve from the lecture notes here: http://www.math.tau.ac.il/~rudnick/courses/sieves2015.html . I have some questions about lecture 15 (http://www.math.tau.ac.il/~rudnick/courses/sieves2015/LargeSieve2.pdf), which I am posting here.

In the proof of Proposition 2.1, I am not able to understand how the Cauchy-Schwarz inequality implies that $|\psi |^2 = \sum_n \big| \sum_r a_n b_r e(n \alpha_r)\big|^2 \leq \sum_n |a_n|^2 \sum_n \big| \sum_r b_r e( n \alpha_r)\big|^2$.

On page 5, in the section on an extremal problem arising from the large sieve, I am not able to deduce the following inequality: $\sum_{ M < n \leq M+X} \big| \sum_r b_r e(n \alpha_r)\big|^2 \leq \sum_n F(n+M)\big|\sum_r b_r e(n \alpha_r)\big|^2 = \sum_{r,s} b_r \overline{b_s} \sum_m e(-m M) \widehat{F}\big( m - ( \alpha_r- \alpha_s)\big)$.

Can you please help me deduce these?


I should preface by saying that I didn't read the various linked pages.

The first inequality $$ |\psi |^2 = \sum_n\Big| \sum_r a_n b_r e(n \alpha_r)\Big|^2 \leq \sum_n |a_n|^2 \sum_n\Big| \sum_r b_r e( n \alpha_r)\Big|^2 $$ is a weak inequality, and a bit different from Cauchy-Schwarz. To see it, write $$B(n) = \Big \lvert \sum_r b_r e(n \alpha_r) \Big \rvert^2 $$ for notational simplicity. Then $$\sum_n\Big| \sum_r a_n b_r e(n \alpha_r)\Big|^2 = \sum_n \lvert a_n \rvert^2 B(n) \leq \sum_n \lvert a_n \rvert^2 \sum_n B(n).$$ The latter inequality follows from the fact that the right-hand side includes all the terms on the left, but also has many other (nonnegative) terms.
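For concreteness, here is a quick numerical sanity check of this term-dropping step, with arbitrary random $a_n$, $b_r$, $\alpha_r$ (nothing special about the sizes or values):

```python
import cmath
import random

def e(t):
    # Standard analytic-number-theory notation: e(t) = exp(2*pi*i*t)
    return cmath.exp(2j * cmath.pi * t)

random.seed(1)
R, N = 5, 40  # arbitrary sizes for the check
b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(R)]
alpha = [random.random() for _ in range(R)]
a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def B(n):
    # B(n) = |sum_r b_r e(n alpha_r)|^2 >= 0
    return abs(sum(b[r] * e(n * alpha[r]) for r in range(R))) ** 2

lhs = sum(abs(a[n]) ** 2 * B(n) for n in range(N))
rhs = sum(abs(a[n]) ** 2 for n in range(N)) * sum(B(n) for n in range(N))
# The product on the right contains every term |a_n|^2 B(n) of the left
# side, plus extra cross terms |a_m|^2 B(n) with m != n, all nonnegative.
assert lhs <= rhs
```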

This seems unrelated to the second portion of your question. I didn't look up the notation that you use there.


(Later edit to include second portion of question)

Let $F(x)$ be a Schwartz function on $\mathbb{R}$ satisfying $F(x) \geq \mathbf{1}_{[1, X]}(x).$ In practice we think of $F$ as a "smooth approximation to the indicator function", and for this application we assume that it is always an overestimate. Notice this also implies that $F(x) \geq 0$.

Then the inequality from the second portion of the question follows from $$ \sum_{M < n \leq M + X} B(n) = \sum_n \mathbf{1}_{[1, X]}(n - M) B(n) \leq \sum_n F(n - M) B(n). $$ Stated in words, we recognize the restriction of $n$ to the interval $[M + 1, M + X]$ through an indicator function (note $n - M \in [1, X]$ exactly when $M < n \leq M + X$), and then use $F$ as an overapproximation of that indicator function. (The question, following the notes, writes the shift as $n + M$; this is just the opposite sign convention for $M$. To match the quoted formulas I keep $F(n + M)$ in the computation below; replacing $M$ by $-M$ translates between the two, and the sign of $M$ does not affect the final large-sieve bound.)
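As a toy numerical illustration of this majorization step: the rescaled Gaussian below is positive everywhere and at least $1$ on $[1, X]$, so it dominates the indicator (it is not the band-limited majorant one ultimately wants for the large sieve, just the simplest thing that makes the inequality checkable). I write the shift as $n - M$, so that the shifted argument lies in $[1, X]$ precisely when $M < n \leq M + X$:

```python
import cmath
import math
import random

def e(t):
    # e(t) = exp(2*pi*i*t)
    return cmath.exp(2j * cmath.pi * t)

random.seed(2)
R, X, M = 4, 12, 30  # arbitrary choices for the check
b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(R)]
alpha = [random.random() for _ in range(R)]

def B(n):
    return abs(sum(b[r] * e(n * alpha[r]) for r in range(R))) ** 2

c = (1 + X) / 2.0

def F(x):
    # A Gaussian rescaled so that F >= 1 on [1, X] and F > 0 everywhere,
    # hence F >= indicator of [1, X].  (Toy majorant only: it is smooth
    # but NOT band-limited, so it is not the large-sieve choice of F.)
    return math.exp(((X - c) / X) ** 2 - ((x - c) / X) ** 2)

lhs = sum(B(n) for n in range(M + 1, M + X + 1))            # M < n <= M + X
rhs = sum(F(n - M) * B(n) for n in range(M - 100, M + X + 100))
assert lhs <= rhs
```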

The final equality follows from Poisson summation. Let's do that now.

We start with $$ \sum_n F(n + M) \left \lvert \sum_{r} b_r e(n \alpha_r) \right \rvert^2 = \sum_{r, s} b_r \overline{b_s} \sum_n F(n + M) e(n(\alpha_r - \alpha_s)).$$ This equality follows from multiplying out the squared series via $\lvert z \rvert^2 = z \overline{z}$ and recollecting. We want to apply Poisson summation to the sum over $n$. It might be useful to write $$ f(n) := F(n + M) e(n(\alpha_r - \alpha_s)), $$ so that we want to use Poisson summation to study $$ \sum_n F(n + M) e(n(\alpha_r - \alpha_s)) = \sum_n f(n) = \sum_m \widehat{f}(m). $$ It only remains to compute $\widehat{f}(m)$.

Here, I'll note that the lecture notes have a minor error that doesn't affect subsequent arguments. (Or possibly I've made a minor error that doesn't affect subsequent arguments). The linked notes implicitly compute $\widehat{f}$ to be $$ \widehat f (y) \overset{?}{=} \widehat{F}(y - (\alpha_r - \alpha_s)) e(-yM),$$ but I don't think this is correct. Instead, we compute $$ \begin{align} \widehat{f}(y) &= \int_\mathbb{R} F(x + M) e(-x(y - (\alpha_r - \alpha_s))) dx \qquad (x \mapsto x - M) \\ &= e(M(y - (\alpha_r - \alpha_s))) \int_\mathbb{R} F(x) e(-x(y - (\alpha_r - \alpha_s))) dx \\ &= e(M(y - (\alpha_r - \alpha_s))) \widehat{F}(y - (\alpha_r - \alpha_s)). \end{align}$$
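If one wants a concrete sanity check of this computation, the self-dual Gaussian $F(x) = e^{-\pi x^2}$ (for which $\widehat{F} = F$ exactly, with the convention $\widehat{F}(y) = \int F(x) e(-xy)\, dx$) makes both the closed form for $\widehat{f}$ and the Poisson summation step numerically verifiable. The specific values of $M$ and of $\beta$, which plays the role of $\alpha_r - \alpha_s$, are arbitrary:

```python
import cmath
import math

def e(t):
    # e(t) = exp(2*pi*i*t)
    return cmath.exp(2j * cmath.pi * t)

# Self-dual Gaussian test function: F(x) = exp(-pi x^2) has Fhat = F.
F = lambda x: math.exp(-math.pi * x * x)
Fhat = lambda y: math.exp(-math.pi * y * y)

M = 3          # integer shift
beta = 0.3     # stands in for alpha_r - alpha_s

def f(x):
    return F(x + M) * e(x * beta)

def fhat_numeric(y, T=8.0, steps=4000):
    # Direct numerical Fourier transform of f (trapezoid rule);
    # f is concentrated near x = -M, so [-T, T] captures essentially all of it.
    h = 2 * T / steps
    total = 0.5 * (f(-T) * e(T * y) + f(T) * e(-T * y))
    for k in range(1, steps):
        x = -T + k * h
        total += f(x) * e(-x * y)
    return h * total

# Check the closed form fhat(y) = e(M (y - beta)) Fhat(y - beta)
for y in (0.0, 0.7, -1.2):
    closed = e(M * (y - beta)) * Fhat(y - beta)
    assert abs(fhat_numeric(y) - closed) < 1e-6

# Check Poisson summation: sum_n f(n) = sum_m fhat(m)
side_f = sum(f(n) for n in range(-50, 51))
side_fhat = sum(e(M * (m - beta)) * Fhat(m - beta) for m in range(-50, 51))
assert abs(side_f - side_fhat) < 1e-10
```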

Using this instead, we ultimately find that $$ \begin{align} \sum_{M < n \leq M + X} B(n) &\leq \sum_n F(n + M) B(n) \\ &= \sum_{r, s} b_r \overline{b_s} \sum_m \widehat{F}\big(m - (\alpha_r - \alpha_s)\big) e\big(M(m - (\alpha_r - \alpha_s))\big). \end{align} \tag{1}$$
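As a check on the equality in $(1)$ (the Poisson summation step, not the majorization), here is a small numerical verification, again with the self-dual Gaussian standing in for $F$ and with arbitrary random $b_r$, $\alpha_r$:

```python
import cmath
import math
import random

def e(t):
    # e(t) = exp(2*pi*i*t)
    return cmath.exp(2j * cmath.pi * t)

# Self-dual Gaussian stand-in for the Schwartz function F: Fhat = F.
F = lambda x: math.exp(-math.pi * x * x)
Fhat = lambda y: math.exp(-math.pi * y * y)

random.seed(3)
R, M = 3, 2  # arbitrary small sizes; M an integer shift
b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(R)]
alpha = [random.random() for _ in range(R)]

def B(n):
    return abs(sum(b[r] * e(n * alpha[r]) for r in range(R))) ** 2

# Smoothed side: sum_n F(n + M) B(n), truncated where F is negligible.
smoothed = sum(F(n + M) * B(n) for n in range(-40, 41))

# Dual side: the double sum over r, s with the m-sum from Poisson summation.
dual = sum(
    b[r] * b[s].conjugate()
    * sum(Fhat(m - (alpha[r] - alpha[s])) * e(M * (m - (alpha[r] - alpha[s])))
          for m in range(-6, 7))
    for r in range(R) for s in range(R)
)
assert abs(smoothed - dual) < 1e-8
```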

To prove the large sieve, one chooses $F$ in $(1)$ such that both $F(x) \geq \mathbf{1}_{[1, X]}(x)$ and $\widehat{F}(y) = 0$ for $\lvert y \rvert \geq \delta$, for some specified $\delta \leq 1$. Applying such a function to $(1)$ with a sequence $\{ \alpha_j \}$ that is $\delta$-spaced modulo $1$, we find that when $r \neq s$ the difference $\alpha_r - \alpha_s$ is at least $\delta$ from every integer, so $\lvert m - (\alpha_r - \alpha_s) \rvert \geq \delta$ for every $m$ and each term $\widehat{F}\big(m - (\alpha_r - \alpha_s)\big)$ vanishes. When $r = s$, the sum over $m$ in $(1)$ consists of the terms $\widehat{F}(m)$, which vanish unless $m = 0$ (since $\lvert m \rvert \geq 1 \geq \delta$ otherwise). Thus for this special class of $F$, we find that $(1)$ equals

$$ \widehat{F}(0) \sum_r \lvert b_r \rvert^2, $$

which is what the linked notes call equation $(3)$.

In summary, the notes have a small mistake that doesn't affect the intended application to the proof of the large sieve.