Intuition behind Roth's Theorem on 3-term APs


Roth's theorem in arithmetic progressions states that if $A$ is a subset of the positive integers such that $$ \limsup_{N\to\infty} \frac{|A \cap \{1,\dots,N\}|}{N} > 0,$$ then $A$ contains a 3-term arithmetic progression.

Equivalently, Roth's theorem can be rephrased as $R_3(N) = o(N)$, where $R_3(N)$ denotes the size of the largest possible subset of $\{1,\dots,N\}$ that does not contain a 3-term AP. Henceforth we write $\delta = \delta(N) = \frac{R_3(N)}{N}$.
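For small $N$ one can compute $R_3(N)$ directly by brute force. The following throwaway sketch (my own illustration, not part of the argument) searches subsets of $\{1,\dots,N\}$ from largest to smallest:

```python
from itertools import combinations

def has_3ap(s):
    # A set contains a 3-term AP iff some pair a < c of equal parity
    # has its midpoint (a + c) // 2 also in the set.
    return any((a + c) % 2 == 0 and (a + c) // 2 in s
               for a, c in combinations(sorted(s), 2))

def r3(N):
    # Size of the largest 3-AP-free subset of {1, ..., N}
    # (exponential-time search; only feasible for small N).
    for k in range(N, 0, -1):
        if any(not has_3ap(set(sub))
               for sub in combinations(range(1, N + 1), k)):
            return k
    return 0
```

For example, `[r3(n) for n in range(1, 6)]` gives `[1, 2, 2, 3, 4]`: the whole of $\{1,2,3\}$ is itself an AP, while $\{1,2,4,5\}$ is AP-free in $\{1,\dots,5\}$.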

Roth actually proved $$R_3(N) = O \bigg(\frac{N}{\log(\log(N))} \bigg).$$

I will briefly state the key parts of the proof alongside with my doubts.

As far as I understand, one begins by choosing a 3-AP-free set $A \subset \{1, \dots, N \}$ of size $\delta N = R_3(N)$. The integral $$ \int_{0}^{1} S(\alpha)^2 S(-2 \alpha)\, d \alpha,$$ where $S(\alpha)= \sum_{n=1}^{N}1_A(n)e(\alpha n)$, counts the triples $(a,b,c) \in A^3$ with $a+b=2c$, i.e. the 3-term arithmetic progressions in $A$ (by orthogonality relations). Since $A$ contains no nontrivial 3-term AP, the only such triples are the trivial ones with $a=b=c$, so the integral equals $|A| = \delta N$.
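To make the orthogonality step concrete, here is a small numerical sanity check (the particular AP-free set is my own illustrative choice, not from Roth's argument). Since $S(\alpha)^2 S(-2\alpha)$ is a trigonometric polynomial with frequencies in $[-2N, 2N]$, the integral over $[0,1]$ can be evaluated exactly by averaging over $M \ge 4N+1$ equally spaced sample points, and it should match the direct count of triples $(a,b,c) \in A^3$ with $a+b=2c$:

```python
import numpy as np

# A small 3-AP-free set: integers whose base-3 digits are 0 or 1, shifted by 1
A = [1, 2, 4, 5, 10, 11, 13, 14]
N = 14

def S(alpha):
    # S(alpha) = sum_{n in A} e(alpha n), with e(t) = exp(2*pi*i*t)
    return sum(np.exp(2j * np.pi * alpha * n) for n in A)

# S(a)^2 S(-2a) has frequencies a + b - 2c, all lying in [2-2N, 2N-2],
# so averaging over M >= 4N+1 sample points recovers the constant term exactly.
M = 4 * N + 1
integral = sum(S(k / M) ** 2 * S(-2 * k / M) for k in range(M)) / M

# Direct count of triples (a, b, c) in A^3 with a + b = 2c
count = sum(1 for a in A for b in A for c in A if a + b == 2 * c)
```

For a 3-AP-free $A$, only the trivial triples $a=b=c$ contribute, so both quantities equal $|A| = 8$ here.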

Then we set $T(\alpha) = \delta_1 \sum_{n=1}^{N}e(\alpha n)$, where $\delta_1 = \frac{R_3(N_1)}{N_1}$ with $N_1 < N$.

The key idea seems to be to compare $S$ and $T$. Namely, we obtain lower and upper bounds for $\delta$, by computing $$ \int_{0}^{1} S(\alpha)^2 T(-2 \alpha) d \alpha,$$ by respectively considering the direct definition of $T$ and by considering the function $C(\alpha)= T(\alpha) - S(\alpha)$. In doing so, we make use of this $\textbf{crucial lemma}$:

Suppose $N_1^2 < N$. Then $$ \|C\|_{\infty} = O(|\delta_1 - \delta|N + N_1^2). $$

$\textbf{First doubt}$: The proof of this lemma seems to use the circle method. It is explained, although with many details left to the reader, in Vaughan's book *The Hardy–Littlewood Method*, and is the same as the proof originally given by Roth. I have been told there are shorter and somewhat "nicer" proofs of this lemma. I would like to know where I can find them, or at least what the main idea behind the proof of this lemma is.

Moving on, after some calculations and using the lemma we arrive at \begin{equation} \delta_1 \delta \ll |\delta_1 - \delta| + \frac{N_1^2}{N}, \qquad (1) \end{equation} which by taking limits readily implies $R_3(N)=o(N)$. This I understand. However, I simply cannot fathom:

$\textbf{Last doubt}$: How does this last inequality imply $$R_3(N) = O \bigg(\frac{N}{\log(\log(N))} \bigg)?$$ Everybody seems to assume it automatically follows from the last inequality. Can anybody explain to me how?

Thank you very much.



On BEST ANSWER

As of 2017, there are several proofs of Roth's theorem, and they are all still being examined. Let's try the one by Ernie Croot:

We present a proof of Roth's theorem that follows a slightly different structure to the usual proofs, in that there is not much iteration. Although our proof works using a type of density increment argument (which is typical of most proofs of Roth's theorem), we do not pass to a progression related to the large Fourier coefficients of our set (as most other proofs of Roth do). Furthermore, in our proof, the density increment is achieved through an application of a quantitative version of Varnavides's theorem, which is perhaps unexpected.

arXiv:0801.2577 A New Proof of Roth's Theorem on Arithmetic Progressions

Having not read a single page of the paper, we learn three things:

  • some proofs involve iteration
  • this proof uses Density Increment (others choose Energy Increment)
  • Varnavides' theorem should play a role.

He defines $r_3(N)$ to be the size of the largest subset $S \subseteq [N] = \{ 1,2,\dots, N\}$ for which there are no solutions to the equation $$ x + y = 2z \quad\text{and}\quad x,y,z \in S \quad\text{and}\quad x \neq y. $$ The theorem - rather succinctly - says $\boxed{{\color{#BAA84B}{r_3(N)}} = {\color{#3057E8}{o(N)}}}$ and his starting point is the same as yours:

$$ \mathbb{E}_{x,d \in \mathbb{F}_p} \big[ f(x)\, f(x+d) \, f(x+2d) \big] = \sum_{r \in \mathbb{F}_p} \hat{f}(r)^2 \hat{f}(-2r) $$

so that we can count the 3-APs (3-term arithmetic progressions) by examining the Fourier coefficients.
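As a sanity check on this identity, one can verify it numerically for a random subset of $\mathbb{F}_p$. The normalization $\hat{f}(r) = \mathbb{E}_x f(x) e(-xr/p)$ used below is my assumption (the paper's convention isn't quoted here), but it is the one that makes the identity hold as stated:

```python
import numpy as np

p = 101  # a prime modulus (arbitrary choice for the check)
rng = np.random.default_rng(0)
A = rng.choice(p, size=30, replace=False)  # a random subset of F_p
f = np.zeros(p)
f[A] = 1.0  # indicator function of A

# Left side: E_{x,d} f(x) f(x+d) f(x+2d), averaging over all x, d mod p
x = np.arange(p)
lhs = np.mean([f[x] * f[(x + d) % p] * f[(x + 2 * d) % p] for d in range(p)])

# Right side: sum_r fhat(r)^2 fhat(-2r),
# with fhat(r) = (1/p) * sum_x f(x) exp(-2*pi*i*x*r/p)  (numpy's FFT sign)
fhat = np.fft.fft(f) / p
r = np.arange(p)
rhs = np.sum(fhat ** 2 * fhat[(-2 * r) % p])
```

Both sides equal $p^{-2}$ times the number of triples $x, x+d, x+2d$ landing in $A$ (including the trivial ones with $d=0$), so they agree up to floating-point error.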

The proof is 5 pages - but it may be longer if you "unpack" the various theorems he uses.


This is certainly not the only proof:

  • [1] uses the Triangle Removal Lemma
  • [2] a thorough discussion of both density increment and energy increment. A bit hard to read at times, but insightful.
  • [3] Roth's theorem over finite fields $\mathbb{F}_3^n$, as discussed in a Carnegie Mellon computer science class.
  • [4] Timothy Gowers discusses 4-APs

I would argue that no proof in analysis is intuitive. Instead, we have to look for the intuitions that were broken.
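As for the last doubt in the question: the quantitative bound does not follow from $(1)$ by taking limits alone; it comes from iterating the density increment. A rough sketch, with unspecified absolute constants, and assuming (as Roth-type arguments are usually run) that one may take $N_1 \approx \delta\sqrt{N}$ at each step: this choice makes $N_1^2/N \approx \delta^2$, so $(1)$ becomes $$ \delta_1 \delta \ll (\delta_1 - \delta) + \delta^2. $$ Unless the density has already doubled ($\delta_1 > 2\delta$), this forces a density increment $$ \delta(N_1) \geq \delta(N) + c\,\delta(N)^2. $$ Doubling the density from $\delta$ to $2\delta$ therefore takes $O(1/\delta)$ steps, and summing the geometric series over successive doublings, the density would exceed $1$ after $O(1/\delta)$ steps in total, which is impossible. So the iteration must break down (the current interval becomes shorter than an absolute constant) within $O(1/\delta)$ steps. But each step replaces $N$ by roughly $\sqrt{N}$, so after $k$ steps the length is about $N^{2^{-k}}$, and for this to drop below a constant one needs $k \gg \log\log N$. Combining the two counts, $$ \log\log N \ll \frac{1}{\delta}, \quad\text{i.e.}\quad \delta(N) \ll \frac{1}{\log\log N}, $$ which is exactly $R_3(N) = O\big(\frac{N}{\log\log N}\big)$.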