Small gaps between primes: why the Maynard-Tao sieve works


TLDR: How do we know that the probability distribution Maynard obtains in the small gaps between primes problem satisfies $\mathbb{P}(n+h_{i} \text{ is prime}) \asymp \frac{\log k}{k}$?

I'm interested in why the sieve method used by James Maynard (the Maynard-Tao sieve) is able to prove the small gaps between primes result, while the sieve method used by Goldston-Pintz-Yildirim (GPY) fails.

I came across the following similar question: Maynard-Tao vs GPY sieve weights for bounded gaps between primes

From the above I see that the GPY method fails because the ratio they needed to exceed $4$ cannot be made greater than $4$.
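To record why $4$ is the threshold, here is my paraphrase of the linked answer (hedged: $\theta$ denotes the level of distribution of the primes, $w_n$ the sieve weights, and $I_k, J_k^{(m)}$ the usual GPY/Maynard integrals; none of this notation is defined in the question above, and the $\approx$ hides bounded constants):

$$\frac{\sum_{N \le n < 2N} w_n \sum_{m=1}^{k} \mathbf{1}_{\mathbb{P}}(n+h_m)}{\sum_{N \le n < 2N} w_n} \;\approx\; \frac{\theta}{2} \cdot \frac{\sum_{m=1}^{k} J_k^{(m)}(F)}{I_k(F)}.$$

Bounded gaps follow once this weighted average exceeds $1$, i.e. once the sieve ratio exceeds $2/\theta$. Unconditionally one has $\theta = 1/2$ (Bombieri-Vinogradov), so the ratio must exceed $4$; the GPY choice of weights caps it at $4$, whereas Maynard's multidimensional weights make it $M_k \sim \log k$, which exceeds $2/\theta$ for any fixed $\theta > 0$ once $k$ is large enough.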

I'm confused (again, from the above link) about how they arrive at the points made about the Maynard weights and the probability density that is found.

Looking at the Maynard paper: https://arxiv.org/pdf/1311.4600.pdf

Page 23, equation (8.19), shows that $\frac{M_{k}}{k} \geq \frac{\log k}{k}$ after a trivial rearrangement. I'm assuming that is how they got "the weights that are used are such that the ratio is about $\frac{\log k}{k}$".

I still don't understand how a probability distribution fits into this. Specifically, how does one know that the distribution Maynard obtained satisfies $\mathbb{P}(n+h_{i} \text{ is prime}) \asymp \frac{\log k}{k}$?

I'm aware of how probability fits into the method in the sense that one uses weights to try to increase the probability that the translates $n+h_{i}$ are prime (equation (2.1) in Maynard's paper).
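To spell out how I now read the probabilistic statement (my own paraphrase, using Maynard's sums $S_1$ and $S_2^{(m)}$ from Section 2 and the asymptotics of his Proposition 4.1; the $\approx$ hides lower-order terms): treat the weights $w_n$ as an unnormalised probability mass on $n \in [N, 2N)$, so that

$$\mathbb{P}\big(n+h_m \text{ is prime}\big) := \frac{S_2^{(m)}}{S_1} = \frac{\sum_{N \le n < 2N} w_n \,\mathbf{1}_{\mathbb{P}}(n+h_m)}{\sum_{N \le n < 2N} w_n} \approx \frac{\log R}{\log N} \cdot \frac{J_k^{(m)}(F)}{I_k(F)}.$$

Since $\frac{\log R}{\log N}$ is a fixed positive constant and the near-optimal symmetric $F$ gives $\frac{k\, J_k^{(m)}(F)}{I_k(F)} \asymp M_k \asymp \log k$, each translate is prime with probability $\asymp \frac{\log k}{k}$ under the weighted measure.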

Update:

Having read more of Maynard's paper, I see that we only require a lower bound on $M_{k}$, and (I think???) it is this lower bound that yields the probability distribution. In particular we obtain $M_{k} \geq \frac{kJ_{k}}{I_{k}} \geq \log k - 2 \log \log k$, and dividing through by $k$ gives $\frac{J_{k}}{I_{k}} \geq \frac{1}{k}(\log k - 2\log \log k)$, which I presume leads to the distribution we require.
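As a sanity check that the bound above really is of size $\frac{\log k}{k}$, here is a small numerical sketch (my own illustration, not from the paper) comparing $\frac{1}{k}(\log k - 2\log\log k)$ with $\frac{\log k}{k}$. Their ratio is $1 - \frac{2\log\log k}{\log k}$, which tends to $1$, but very slowly, so the lower bound and $\frac{\log k}{k}$ have the same order of magnitude:

```python
import math

def lower_bound(k):
    """The bound (1/k)(log k - 2 log log k) from the update above."""
    return (math.log(k) - 2 * math.log(math.log(k))) / k

def target(k):
    """The claimed order of magnitude (log k)/k."""
    return math.log(k) / k

# The ratio lower_bound/target creeps toward 1 as k grows,
# confirming the two quantities are comparable (asymp).
for k in (10**2, 10**4, 10**6, 10**8):
    r = lower_bound(k) / target(k)
    print(f"k = {k:>9}: lower_bound / (log k / k) = {r:.3f}")
```

Of course this says nothing about the sieve itself; it only checks that the $-2\log\log k$ correction does not change the order of magnitude.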