Let $p$ be a prime number and $S\subseteq\{1,2,\cdots,p-1\}$ be a subset such that $|S|>p^{\frac{3}{4}}$. Prove that for every positive integer $m$, there exist $a_{1},a_{2},b_{1},b_{2},c_{1},c_{2} \in S$ such that $$m\equiv a_{1}a_{2}+ b_{1}b_{2}+c_{1}c_{2} \pmod p$$
Suppose there exists $m$ for which this is not true. I thought about the polynomial $$P(x,y) = \prod _{i=1}^{p-1} (x\cdot y -m-i)$$ where $x,y \in \mathbb{Z}_{p}^3$ and $x\cdot y$ denotes the dot product $x_1y_1+x_2y_2+x_3y_3$. This polynomial has degree $2p-2$ and vanishes on $S^6$. I tried to use the Combinatorial Nullstellensatz, but it seems it is not applicable, or I didn't construct the proper polynomial.
$\newcommand{\Zp}{\mathbb Z_p}$I thought I'll take this opportunity to explain, in some detail, the proof of Bourgain and how it involves "Fourier analysis". It is an elementary proof, it really is! It's taken from Tao-Vu's book on additive combinatorics, chapter $4$. The result is Lemma $4.10$ of the book. I'll be making one correction from the book (where I believe there is a printing error).
I also apologize to the original poster if they already know this proof: this is meant primarily for future visitors, and to illustrate a very powerful technique in combinatorics.
The Fourier transform
Essentially, the Fourier transform enters as follows: we define a notion of "expectation" of a function, and it turns out that we can "induce probabilistic structure" on $\mathbb Z_p$ by defining the equivalent of a characteristic function for random variables. That characteristic function is referred to as the "Fourier transform".
Now, if you've seen how the characteristic function is used in classical probability, you will not be surprised to see how it's used here.
We will restrict ourselves to functions with domain $\mathbb Z_p = \{0,1,2,\ldots,p-1\}$ where $p$ is a prime, and perform probabilistic manipulations treating this as our "sample space".
Occasionally, to clarify the situation when there are multiple variables, the notation $\mathbb E_{x \in \Zp}[f] := \frac 1{p} \sum_{x\in \Zp} f(x)$ will be used to show that the expectation is taken by summing over all instances of the variable $x$ in the summand.
The following map, known as the exponential map, plays the role of converting additive structure into multiplicative structure.
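For completeness, the map in question is (in the standard normalization) $$ e : \Zp \to \mathbb C, \qquad e(x) = e^{2\pi i x/p}, $$ which satisfies $e(x+y) = e(x)e(y)$ (addition taken modulo $p$) and $|e(x)| = 1$ for all $x,y \in \Zp$.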
Now, we can define the Fourier transform of a function. In order to do this, we define characters, which are functions that carry the additive structure of $\Zp$ into the multiplicative structure of $e$.
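Concretely, for each $\xi \in \Zp$ the character $e_{\xi} : \Zp \to \mathbb C$ is given by $$ e_{\xi}(x) = e^{2\pi i \xi x / p} $$ (with $\xi x$ taken modulo $p$), so that $e_{\xi}(x+y) = e_{\xi}(x)e_{\xi}(y)$: addition in $\Zp$ becomes multiplication of unit complex numbers.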
We can define the Fourier transform as follows.
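Explicitly, for $f : \Zp \to \mathbb C$ and $\xi \in \Zp$, $$ \hat{f}(\xi) := \mathbb E_{x \in \Zp}[f(x)e_{\xi}(x)] = \frac 1p \sum_{x \in \Zp} f(x)e_{\xi}(x). $$ (Some references put a complex conjugate on the character here; since $\overline{e_{\xi}} = e_{-\xi}$, the two conventions differ only by the relabeling $\xi \mapsto -\xi$, which affects none of the estimates below.)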
The quantities $\hat{f}(\xi)$ are called the Fourier coefficients of $f$. I'm sure that those familiar with analysis can already see the similarities with the continuous-space variant. We now prove that characters are fundamentally "independent of each other", and can be used to break a function into parts that are "independent of each other" as well.
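The precise statement is the orthogonality lemma: for $\xi, \xi' \in \Zp$, $$ \mathbb E_{x \in \Zp}\left[e_{\xi}(x)\overline{e_{\xi'}(x)}\right] = \begin{cases} 1 & \text{if } \xi = \xi', \\ 0 & \text{if } \xi \neq \xi'. \end{cases} $$ The proof is a one-line geometric series: $e_{\xi}\overline{e_{\xi'}} = e_{\xi - \xi'}$, and for $\eta \neq 0$ we have $\sum_{x \in \Zp} e_{\eta}(x) = \sum_{x=0}^{p-1} \omega^{x} = 0$ where $\omega = e^{2\pi i \eta/p} \neq 1$.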
Now, the above lemma shows that for $\xi \neq \xi'$, $e_{\xi} \neq e_{\xi'}$. We can consider the complex vector space of functions $f : \Zp \to \mathbb C$, equipped with the inner product $\langle f,g\rangle = \mathbb E_{\Zp}[f\overline{g}]$. This is a vector space of dimension $p$, and we have found $p$ distinct functions $e_{\xi}, \xi \in \Zp$ which are orthonormal with respect to this inner product (pairwise orthogonal, each of norm $1$). Since orthonormal vectors are linearly independent, they form a basis, and we obtain the following Fourier inversion formula.
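Namely, expanding $f$ in the orthonormal basis $\{e_{\xi}\}$ gives $$ f(x) = \sum_{\xi \in \Zp} \langle f, e_{\xi}\rangle\, e_{\xi}(x) = \sum_{\xi \in \Zp} \hat{f}(-\xi)\, e_{\xi}(x) \quad \text{for all } x \in \Zp. $$ Since the sum runs over all of $\Zp$, and the proof below only uses $|e_{\xi}(x)| = 1$ and $e_{0} = 1$, the sign of the index is harmless, and one often writes the expansion simply as $f(x) = \sum_{\xi} \hat{f}(\xi)e_{\xi}(x)$.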
This basically says that $f$ can be split up into "independent" components, each of which can be played around with without affecting the others. The full power of this will be seen in the proof of the lemma.
To finish the introduction and the connection with classical Fourier analysis, we define the convolution of two functions $f,g : \Zp \to \mathbb C$ by $$ (f * g)(x) = \mathbb E_{y \in \Zp} (f(x-y)g(y)) $$ The importance of the convolution is the easily verified identity $$ \widehat{f * g} = \hat{f} \hat{g} $$ for $f,g : \Zp \to \mathbb C$. Note that the convolution of two densities in classical probability is done when you're adding two independent random variables with those distributions. We will see how things work here.
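As a quick numerical sanity check (my own, not part of the book's proof), the identity $\widehat{f*g} = \hat{f}\hat{g}$ can be verified directly; the sketch below assumes the convention $\hat{f}(\xi) = \frac 1p \sum_x f(x)e^{2\pi i \xi x/p}$, and the identity holds under the conjugate convention as well.

```python
import cmath
import random

p = 7  # any prime works here

def fourier(f):
    # hat f(xi) = (1/p) * sum_x f(x) * e_xi(x), with e_xi(x) = exp(2*pi*i*xi*x/p)
    return [sum(f[x] * cmath.exp(2j * cmath.pi * xi * x / p) for x in range(p)) / p
            for xi in range(p)]

def conv(f, g):
    # (f * g)(x) = E_y f(x - y) g(y), with the subtraction taken mod p
    return [sum(f[(x - y) % p] * g[y] for y in range(p)) / p for x in range(p)]

random.seed(0)
f = [random.random() for _ in range(p)]
g = [random.random() for _ in range(p)]

lhs = fourier(conv(f, g))                            # hat(f * g)
rhs = [a * b for a, b in zip(fourier(f), fourier(g))]  # hat f * hat g, pointwise
assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-12
```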
Attacking our problem
Let's see how this beautifully ties together when we prove the following theorem, that I think users would struggle to prove from scratch.
Everywhere in this section, $+$ will denote ordinary addition, and $+_p$ denotes addition modulo $p$. Similarly, multiplication modulo $p$ will be indicated by $\cdot_p$, while ordinary multiplication will not be indicated (i.e. variables will be clumped without a gap e.g. $xy$).
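The statement we are after (the question's claim, in this notation) is the following. Theorem (Bourgain; Lemma $4.10$ in Tao-Vu). Let $p$ be a prime and let $A \subseteq \Zp \setminus \{0\}$ satisfy $|A| > p^{\frac 34}$. Then $$ \{a_1 \cdot_p a_2 +_p b_1 \cdot_p b_2 +_p c_1 \cdot_p c_2 : a_1,a_2,b_1,b_2,c_1,c_2 \in A\} = \Zp. $$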
Proof : Define the indicator function $1_B$ of any subset $B \subset \Zp$ by $1_B(x) = 1$ if $x \in B$ and $1_B(x) = 0$ if $x \notin B$. We will use the indicator to track when elements belong to a certain set. Also, for $z \in \Zp, B \subset \Zp$, let $z\cdot_p B = \{z\cdot_p b : b \in B\}$. This is fairly standard notation.
Now define $f : \Zp \to \mathbb R$ by $f(x) = \frac{|\{y\in A : x\in y\cdot_p A\}|}{|A|}$. Observe that because $e_{0} = 1$ we have $$ \hat{f}(0) = \mathbb E_{\Zp} [fe_{0}] = \mathbb E_{\Zp}[f] = \mathbb E_{x \in \Zp} \frac{|\{y\in A : x\in y\cdot_p A\}|}{|A|} = \frac{1}{p|A|}\sum_{x \in \Zp} \sum_{y \in A} 1_{y \cdot_p A}(x) \\ = \frac{1}{p|A|} \sum_{y \in A}\sum_{x \in \Zp} 1_{y \cdot_p A}(x) = \frac 1{p|A|} \sum_{y \in A} |y \cdot_p A| = \frac 1{p|A|} \sum_{y \in A} |A| = \frac{|A|}{p} = \mathbb P_{\Zp}(A) $$ where the last steps use the fact that multiplication by a fixed $y \neq 0$ is a bijection of $\Zp$, so $|y \cdot_p A| = |A|$.
Now, let $\xi \in \mathbb Z_p \setminus \{0\}$. We will calculate $\hat{f}(\xi)$ as follows. By definition, write $$ \hat{f}(\xi) = \mathbb E_{x\in \Zp} [f(x)e_{\xi}(x)] = \frac 1{p|A|} \sum_{x \in \Zp}\sum_{a \in A} e_{\xi}(x) 1_{a \cdot_p A}(x) $$ Make the change of variable $x = a \cdot_p y$. Then, as $x$ runs over $\Zp$, so does $y$, since $a \neq 0$. $$ \frac 1{p|A|} \sum_{x \in \Zp}\sum_{a \in A} e_{\xi}(x) 1_{a \cdot_p A}(x) = \frac 1{p|A|} \sum_{y \in \Zp}\sum_{a \in A} e_{\xi}(a \cdot_p y) 1_{a \cdot_p A}(a \cdot_p y) $$ Use $e_{\xi}(a\cdot_p y) = e_{\xi \cdot_{p} a}(y)$ and $ 1_{a \cdot_p A}(a \cdot_p y) = 1_A(y)$ to write $$ \frac 1{p|A|} \sum_{y \in \Zp}\sum_{a \in A} e_{\xi}(a \cdot_p y) 1_{a \cdot_p A}(a \cdot_p y) = \frac 1{p|A|} \sum_{y \in \Zp}\sum_{a \in A} e_{\xi \cdot_p a}(y) 1_{A}(y) \\ = \frac 1{|A|}\sum_{a \in A} \frac 1p \sum_{y \in \Zp} e_{\xi \cdot_p a}(y)1_{A}(y) = \frac{1}{|A|}\sum_{a \in A} \hat{1_A}(\xi \cdot_p a) $$
(Note: there seemed to be an error in the previous expression in the book's proof; I think I've corrected that here.) Since $\xi \neq 0$, the map $a \mapsto \xi \cdot_p a$ is injective, i.e. $\xi \cdot_p a \neq \xi \cdot_p b$ for $a \neq b$. Using the Cauchy-Schwarz inequality, $$ \hat{f}(\xi) = \frac{1}{|A|}\sum_{a \in A} \hat{1_A}(\xi \cdot_p a) \\ \implies |\hat{f}(\xi)| \leq \frac 1{|A|} \left(\sum_{a \in A} 1\right)^{\frac 12}\left(\sum_{a \in A} \left|\hat{1_A}(\xi \cdot_p a)\right|^2\right)^{\frac 12} $$
Take the identity $\sum_{b \in \Zp} \left|\hat{1_A}(b)\right|^2 = \frac{|A|}{p}$ as an exercise (this is Parseval's identity; use the definitions and interchange sums cleverly, as in the earlier demonstration). Since $\xi \neq 0$, the elements $\xi \cdot_p a$ for $a \in A$ are distinct, so $\sum_{a \in A} \left|\hat{1_A}(\xi \cdot_p a)\right|^2 \leq \sum_{b \in \Zp} \left|\hat{1_A}(b)\right|^2$, and therefore $$ |\hat{f}(\xi)| \leq \frac 1{|A|} \left(\sum_{a \in A} 1\right)^{\frac 12}\left(\sum_{a \in A} \left|\hat{1_A}(\xi \cdot_p a)\right|^2\right)^{\frac 12} \leq \frac{1}{|A|} |A|^{1/2} \frac{|A|^{1/2}}{p^{1/2}} = \frac 1{p^{1/2}} $$
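These two bounds, $\hat{f}(0) = \frac{|A|}{p}$ and $|\hat{f}(\xi)| \leq p^{-1/2}$ for $\xi \neq 0$, together with the Parseval exercise, can be checked numerically on a small random example. This sketch is my own check, assuming the convention $\hat{f}(\xi) = \frac 1p \sum_x f(x)e^{2\pi i \xi x/p}$; all three quantities are unchanged under the conjugate convention.

```python
import cmath
import random

p = 11
random.seed(2)
A = set(random.sample(range(1, p), 7))  # |A| = 7 > 11^(3/4) ≈ 6.04

def fourier(f):
    # hat f(xi) = (1/p) * sum_x f(x) * e_xi(x), with e_xi(x) = exp(2*pi*i*xi*x/p)
    return [sum(f[x] * cmath.exp(2j * cmath.pi * xi * x / p) for x in range(p)) / p
            for xi in range(p)]

# f(x) = |{y in A : x in y * A mod p}| / |A|, as in the proof
f = [sum(1 for y in A if x in {(y * a) % p for a in A}) / len(A) for x in range(p)]
fhat = fourier(f)

indicator = [1.0 if x in A else 0.0 for x in range(p)]
parseval = sum(abs(c) ** 2 for c in fourier(indicator))

assert abs(fhat[0] - len(A) / p) < 1e-12                   # hat f(0) = |A| / p
assert max(abs(c) for c in fhat[1:]) <= p ** -0.5 + 1e-12  # |hat f(xi)| <= p^(-1/2)
assert abs(parseval - len(A) / p) < 1e-12                  # sum_b |hat 1_A(b)|^2 = |A| / p
```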
We will now prove the theorem as follows: we will show that for every $x \in \Zp$, it is true that $$ (f*f*f)(x) > 0 $$ Since $f \geq 0$ and $f$ is non-zero precisely on $\{a_1 \cdot_p a_2 : a_1, a_2 \in A\}$, the definition of $*$ shows that $f*f*f$ is non-zero precisely on the set $\{a_1 \cdot_p a_2 +_p b_1 \cdot_p b_2 +_p c_1 \cdot_p c_2 : a_i,b_i,c_i \in A\}$. So if $f*f*f$ is positive everywhere, this set is all of $\Zp$, and we will be done.
So, begin with $(f*f*f)(x)$. First, observe that $f$ is real valued, hence this is a real number. Now, use the Fourier inversion formula and the convolution identity $\widehat{f*f*f} = \hat{f}^3$ to get $$ (f*f*f)(x) = \sum_{\xi \in \Zp} \widehat{f*f*f}(\xi) e_{\xi}(x) = \sum_{\xi \in \Zp}\hat{f}(\xi)^3 e_{\xi}(x) $$ Since the left-hand side is real, we may take real parts (denoted by $\mathfrak{R}$). Using $e_{0} = 1$, the fact that $\hat{f}(0) = \frac{|A|}{p}$ is real, $|e_{\xi}(x)| = 1$, and $\mathfrak{R}(z) \geq -|z|$ for complex $z$, $$ (f*f*f)(x) = \mathfrak{R}\sum_{\xi \in \Zp}\hat{f}(\xi)^3 e_{\xi}(x) \geq \hat{f}(0)^3 - \sum_{\xi \in \Zp \setminus \{0\}} \left|\hat{f}(\xi)\right|^3 $$
Now, we finish by observing \begin{align} \hat{f}(0)^3 - \sum_{\xi \in \Zp \setminus \{0\}} \left|\hat{f}(\xi)\right|^3 & \geq \frac{|A|^3}{p^3} - \frac 1{p^{\frac 12}} \sum_{\xi \in \Zp} \left|\hat{f}(\xi)\right|^2\\ & \geq \frac{|A|^3}{p^3} - \frac{|A|}{p^{\frac 32}} >0 \end{align} where the first step uses $|\hat{f}(\xi)| \leq p^{-\frac 12}$ for $\xi \neq 0$, and the second uses Parseval's identity together with $0 \leq f \leq 1$: $\sum_{\xi \in \Zp} |\hat{f}(\xi)|^2 = \mathbb E_{\Zp}[f^2] \leq \mathbb E_{\Zp}[f] = \frac{|A|}{p}$.
because $|A| > p^{\frac 34}$, which is equivalent to $\frac{|A|^3}{p^3} > \frac{|A|}{p^{\frac 32}}$. Hence, the proof is complete.
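For very small primes, the theorem can even be confirmed by exhaustive search. A brute-force sketch (my own check, only feasible for tiny $p$, since it enumerates every qualifying subset):

```python
from itertools import combinations

def covers(p, A):
    # does {a1*a2 + b1*b2 + c1*c2 mod p : all six factors in A} equal Z_p?
    prods = {(a * b) % p for a in A for b in A}
    sums = {(x + y + z) % p for x in prods for y in prods for z in prods}
    return len(sums) == p

for p in (5, 7):
    threshold = p ** 0.75
    for size in range(1, p):
        if size <= threshold:
            continue  # the theorem only applies when |A| > p^(3/4)
        for A in combinations(range(1, p), size):
            assert covers(p, A)
```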
The probabilistic/combinatorial undertones
Remember that a field is closed under both addition and multiplication. Therefore, if $A \cup \{0\}$ had been closed under these operations, the set shown above to equal $\Zp$ would have been contained in $A \cup \{0\}$ itself. The above result shows that it takes very little for sums of products of elements of $A$ to reach all of $\Zp$.
This can be contrasted with section $2.8$, where sum-product estimates are discussed. Indeed, the sentiment expressed above and in this section is that the sums and products of elements of $A$ have a larger cardinality than $A$, and if this is not the case then $A$ is either a subfield or very close to one.
To provide the probabilistic undertones of the proof, think about the set $\{a_1a_2+b_1b_2+c_1c_2: a_i,b_i,c_i \in A\}$. One can imagine six different "random variables" $X_i, i=1,\ldots,6$ which are "uniformly distributed" in $A$. What the proof shows is that, under this interpretation, the "probability" that $X_1X_2+X_3X_4+X_5X_6 = x$ is positive for every $x \in \Zp$, so that the set $\{(a_1,a_2,b_1,b_2,c_1,c_2) \in A^6 : a_1a_2+b_1b_2+c_1c_2 = x\}$ must be non-empty for every $x$.
It does this by considering the "characteristic function" of that random variable (i.e. the Fourier transform of $f*f*f$) and then showing that $\hat{f}(0)$ is much, much larger than $|\hat{f}(\xi)|$ for any $\xi \neq 0$. Fundamentally, though, that comes down to the following fact: $\sum_{x \in \Zp} e_{\xi}(x)1_{\Zp}(x) = p$ for $\xi = 0$, and it equals $0$ for $\xi \neq 0$. (Note: $1_{\Zp}(x) = 1$ always, but see the next paragraph.)
Therefore, when $A$ is a large set really close to $\Zp$, the sum $\sum_{x \in \Zp} e_{\xi}(x) 1_{A}(x) = p\,\hat{1_A}(\xi)$ will be close to $p$ when $\xi=0$, and close to $0$ when $\xi \neq 0$. This behavior is precisely reflected in the Fourier coefficient bounds.
Then, the Fourier coefficient bounds show that when $A$ is large enough, $\hat{f}(0)^3$ (a positive real number controlled by the size of $A$) dominates the absolute values of all the other Fourier coefficients' contributions put together, which concludes the proof.