Proof of Dirichlet's Theorem on Primes using $\sum_{\substack{p=1\\ p\equiv h\bmod k}}^\infty\frac{\ln p}{p^s}\sim\frac{1}{\varphi(k)(s-1)}$

In my analytic number theory course, we just finished proving Dirichlet's Theorem on primes in arithmetic progressions using the methods found in Apostol's Introduction to Analytic Number Theory. My professor also said that another method of proof is to show that

$$\sum_{\substack{p=1 \\ p \equiv h \bmod k}}^\infty \frac{\ln p}{p^s} \sim \frac{1}{\varphi(k)(s - 1)}.$$

I am looking for a paper or any proof that uses this method, as I was unable to find one online or even using Approach0.


The major aspects of the proof are implicit in most treatments. I don't know where this is written explicitly, but I'll sketch a proof of the claim and how this implies Dirichlet's Theorem. The main idea is that $$ \sum_{p \equiv h \bmod k} \frac{\log p}{p^s} $$ is essentially a sum of logarithmic derivatives of Dirichlet $L$-functions, and one can replicate the proof of the prime number theorem on each of these $L$-functions and add them together.

One statement of Dirichlet's theorem is that there are infinitely many primes congruent to $h \bmod k$ if $\gcd(h, k) = 1$. This is an easier statement to prove. But implicit in the OP's statement is a proof that there are approximately $\frac{1}{\varphi(k)} \frac{X}{\log X}$ primes up to $X$ in this arithmetic progression. Below, I sketch this latter result, and at the end I note how one can improve the error term.


We begin with

$$ \sum_{p \equiv h \bmod k} \frac{\log p}{p^s} = \sum_{p} \frac{\log p}{p^s} \delta_{[p \equiv h]}(p),$$ where $$ \delta_{[p \equiv h]}(p) = \begin{cases} 1 & p \equiv h \\ 0 & \text{else} \end{cases}$$ is an indicator function (written as a cross between an Iverson bracket and a Kronecker $\delta$). Character theory/orthogonality shows that $$ \delta_{[p \equiv h]}(p) = \frac{1}{\varphi(k)} \sum_\chi \overline{\chi(h)} \chi(p), $$ where the sum is over Dirichlet characters mod $k$. Thus $$ \sum_{p \equiv h \bmod k} \frac{\log p}{p^s} = \frac{1}{\varphi(k)} \sum_\chi \overline{\chi(h)} \sum_p \frac{\chi(p) \log p}{p^s} \tag{1} $$ and the problem is reduced to studying these last series. A slight bump is that this last series isn't actually the right series to look at, but it's very close to it. The "right" series is $$ \sum_{n \geq 1} \frac{\chi(n) \Lambda(n)}{n^{s}} = - \frac{L'(s, \chi)}{L(s, \chi)}, $$ where $\Lambda(n)$ denotes von Mangoldt's function, $$ \Lambda(n) = \begin{cases} \log p & n = p^\ell \text{ for some } \ell \in \mathbb{N}, \\ 0 & \text{else}. \end{cases} $$
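The orthogonality relation can be sanity-checked numerically. Here is a small sketch for the illustrative choice $k = 5$ (an assumption made for concreteness): $(\mathbb{Z}/5\mathbb{Z})^\times$ is cyclic of order $4$, generated by $2$, so the four characters are $\chi_j(2^a) = e^{2\pi i j a / 4}$.

```python
from cmath import exp, pi

# Sketch for k = 5: (Z/5Z)* is cyclic of order 4, generated by 2, so the
# four Dirichlet characters mod 5 are chi_j(2^a) = exp(2*pi*i*j*a/4).
k, g, order = 5, 2, 4
# discrete-log table: dlog[n] = a where g^a = n (mod k)
dlog = {pow(g, a, k): a for a in range(order)}

def chi(j, n):
    """The j-th Dirichlet character mod 5 (zero off the units)."""
    if n % k == 0:
        return 0
    return exp(2 * pi * 1j * j * dlog[n % k] / order)

def indicator(h, n):
    """(1/phi(k)) * sum over chi of conj(chi(h)) * chi(n); by
    orthogonality this is 1 when n = h (mod k) and 0 otherwise."""
    return sum(chi(j, h).conjugate() * chi(j, n) for j in range(order)) / order
```

For instance, `indicator(3, 13)` is (numerically) $1$ since $13 \equiv 3 \bmod 5$, while `indicator(3, 7)` vanishes.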

The von Mangoldt function includes higher prime powers, but there just aren't enough of them to meaningfully contribute. That is, we have $$ \sum_p \frac{\chi(p) \log p}{p^s} = \sum_{p} \frac{\chi(p) \Lambda(p)}{p^s} = \sum_{n} \frac{\chi(n) \Lambda(n)}{n^s} - \underbrace{\sum_{\substack{p \\ \ell \geq 2}} \frac{\chi(p^\ell) \log p}{p^{s \ell}}}_{\text{call this } E(s)}, $$ and for $s$ with $\mathrm{Re}(s) > \frac{1}{2}$, one can show that $E(s)$ converges absolutely. Thus all the growth of $\sum_{p \equiv h} \frac{\log p}{p^s}$ comes from the primes, not the prime powers. More precisely, all poles with $\mathrm{Re}(s) > \frac{1}{2}$ come from the sum over primes, not the sum over prime powers. As we look only near $\mathrm{Re}(s) \geq 1$, we can consider $(2)$ below in place of $(1)$ and get the same polar description.
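To see numerically that the prime-power tail $E(s)$ is harmless, here is a sketch (assuming only the trivial bound $|\chi| \leq 1$, so it suffices to sum absolute values) of its partial sums at the illustrative real point $s = 0.75 > \tfrac{1}{2}$; the partial sums stabilize quickly as the cutoff grows.

```python
from math import log

def sieve(N):
    """Primes up to N by the sieve of Eratosthenes."""
    is_p = [True] * (N + 1)
    is_p[0], is_p[1] = False, False
    for i in range(2, int(N ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = [False] * len(is_p[i * i :: i])
    return [n for n, b in enumerate(is_p) if b]

def E_partial(s, N):
    """Partial sum of sum_{p <= N, ell >= 2} log(p) / p^(s*ell),
    using the bound |chi(p^ell)| <= 1."""
    total = 0.0
    for p in sieve(N):
        ell = 2
        term = log(p) / p ** (s * ell)
        while term > 1e-18:  # stop once terms are negligible
            total += term
            ell += 1
            term = log(p) / p ** (s * ell)
    return total
```

Raising the cutoff from $10^4$ to $10^5$ changes the value only in the second decimal place, consistent with absolute convergence for $\mathrm{Re}(s) > \tfrac{1}{2}$.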

From $(1)$, this means that $$ \sum_{p \equiv h \bmod k} \frac{\log p}{p^s} \approx \frac{1}{\varphi(k)} \sum_\chi \overline{\chi(h)} \left( - \frac{L'(s, \chi)}{L(s, \chi)}\right), \tag{2} $$ where I use $\approx$ here to show that this isn't an equality — but as we've just shown, any polar behavior for $\mathrm{Re}(s) > \frac{1}{2}$ must be precisely the same.

We can now cast the question entirely in terms of complex analysis. The point is to show that $(2)$ has a pole at $s = 1$ (and further that the residue of this pole is $\frac{1}{\varphi(k)}$). Just as with the zeta function, the poles of $L'/L$ have two sources: poles of $L(s, \chi)$ and zeros of $L(s, \chi)$ — and each contributes a simple pole.

We consider the shape of these arguments in turn.

  1. First, we note that $L(s, \chi)$ has a pole at $s = 1$ if and only if $\chi$ is the trivial character mod $k$. This is probably also in whatever proof you've already learned. When $\chi$ is a nontrivial character, one can show using partial summation (Dirichlet's test) that $L(s, \chi)$ converges for $\mathrm{Re}(s) > 0$. When $\chi$ is trivial, $L(s, \chi_0) = \zeta(s) \prod_{p \mid k} (1 - p^{-s})$, which is essentially $\zeta(s)$ and inherits its simple pole at $s = 1$.

  2. We turn to the zeros of $L(s, \chi)$. One must show that $L(1 + it, \chi) \neq 0$; the case $t \neq 0$ goes as in the proof of the prime number theorem, and the hardest case, $L(1, \chi) \neq 0$ for real $\chi$, typically uses an argument involving Gauss sums or class numbers. This should appear in every proof of Dirichlet's theorem on primes in arithmetic progressions.

    As $L(s, \chi)$ has meromorphic continuation to $\mathbb{C}$ and no $L(s, \chi)$ vanishes at $s = 1$, there is a (completely implicit, ineffective) neighborhood around $s = 1$ containing no zeros. Similarly, the lack of zeros on the line $\mathrm{Re}(s) = 1$ gives a (completely implicit, ineffective) zero-free region.
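Point 1 above can be illustrated numerically. As a sketch, take the nontrivial character mod $4$ (chosen here purely as an example): its $L$-series converges for $\mathrm{Re}(s) > 0$ by Dirichlet's test, and at $s = 1$ the alternating series converges to the familiar value $L(1, \chi) = \pi/4$.

```python
from math import pi

def chi4(n):
    """The nontrivial Dirichlet character mod 4."""
    return {1: 1, 3: -1}.get(n % 4, 0)

def L_partial(s, N):
    """Partial sum sum_{n <= N} chi4(n) / n^s of L(s, chi4)."""
    return sum(chi4(n) / n ** s for n in range(1, N + 1))
```

The partial sums at $s = 1$ approach $\pi/4 \approx 0.7854$, and even at $s = 0.5$ (inside the critical strip, where absolute convergence fails) they remain bounded, in contrast with the divergence of $\zeta(s)$ there.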

In particular, $\frac{L'}{L}(s, \chi)$ has at most one pole with $\mathrm{Re}(s) \geq 1$ and meromorphic continuation to all of $\mathbb{C}$. Looking back at $(2)$, let's extract the leading behavior. The only pole in the RHS of $(2)$ comes from the trivial character $\chi_0$. Writing $c_k$ for the residue of $L(s, \chi_0)$ at $s = 1$ (which we don't bother to work out), we see for $s$ near $1$ that $$ L(s, \chi_0) \sim \frac{c_k}{s - 1} \implies L'(s, \chi_0) \sim \frac{-c_k}{(s-1)^2} \implies - \frac{L'(s, \chi_0)}{L(s, \chi_0)} \sim \frac{1}{s-1}. $$ Thus near $s = 1$, we have now shown that $$\sum_{\substack{p=1 \\ p \equiv h \bmod k}}^\infty \frac{\ln p}{p^s} \sim \frac{1}{\varphi(k)} \sum_\chi \overline{\chi(h)} \left( - \frac{L'(s, \chi)}{L(s, \chi)}\right) \sim \frac{1}{\varphi(k)} \overline{\chi_0(h)} \frac{1}{s - 1} = \frac{1}{\varphi(k)(s - 1)}$$ for $s$ near $1$. Further, we've shown there are no other poles with $\mathrm{Re}(s) \geq 1$.
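The chain of asymptotics $L \sim c/(s-1) \implies -L'/L \sim 1/(s-1)$ can be spot-checked numerically. As a sketch, take the trivial-character case, where $L(s, \chi_0)$ is $\zeta(s)$ up to finitely many Euler factors, and approximate $\zeta$ by a truncated Euler–Maclaurin-style formula (an assumption that is adequate for real $s$ near $1$); then $(s-1) \cdot (-\zeta'/\zeta)(s)$ is close to $1$.

```python
def zeta_approx(s, N=10**5):
    """Approximation zeta(s) ~ sum_{n <= N} n^-s + N^(1-s)/(s-1),
    the leading Euler-Maclaurin correction; good for real s near 1."""
    return sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

def neg_log_deriv(f, s, h=1e-6):
    """-f'(s)/f(s) via a central finite difference."""
    return -(f(s + h) - f(s - h)) / (2 * h) / f(s)

s = 1.01
residue_check = (s - 1) * neg_log_deriv(zeta_approx, s)  # close to 1
```

At $s = 1.01$ this gives a value within a percent of $1$, reflecting the simple pole of $-\zeta'/\zeta$ at $s = 1$ with residue $1$.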

The theorems of Hadamard and de la Vallée Poussin (precisely as in their proofs of the prime number theorem) now show that $$ \pi(X; h, k) := \sum_{\substack{p \leq X \\ p \equiv h \bmod k}} 1 = (1 + o(1)) \frac{X}{\varphi(k) \log X}. $$


I will note that more powerful statements of Dirichlet's theorem (say, with error terms) can be easily stated (though not easily proved) from here. The point is to study $(2)$ for $\mathrm{Re} (s) < 1$, and in particular to study nontrivial zeros of $L(s, \chi)$ in the critical strip.

More broadly, just as a better understanding of the zeros of $\zeta(s)$ gives better error terms for the prime number theorem, so a better understanding of the zeros of $L(s, \chi)$ gives better error terms for Dirichlet's theorem.


Some references to read more about this style of result are Montgomery and Vaughan's book and Davenport's book on multiplicative number theory. Both actually go into the further detail I commented on just above, studying zero-free regions (which is rather delicate work, more delicate than for $\zeta(s)$). So they describe too much, rather than too little. But everything I've stated will appear at least implicitly in their presentations. And certainly they include the pieces I omitted from the proof, such as showing $L(1 + it, \chi) \neq 0$, convergence, and character orthogonality.

The other piece that I skimmed was the application of the work of de la Vallée Poussin and Hadamard, which is a rather delicate integral analysis. Davenport and Montgomery–Vaughan prove much more sophisticated details about the zeros and bounds of $L'/L$ in the critical strip, and as a result can apply easier complex analysis (e.g. Perron's formula directly).

But the sketch given above and a direct application of an integral argument really is sufficient. As a final note, I sketch this. Denote $$ \vartheta(X, \chi) := \sum_{p \leq X} \chi(p) \log p = \sum_{n \leq X} \chi(n) \Lambda(n) + O(\sqrt{X}). $$ Then one can show that $$ \int_1^\infty \frac{\vartheta(x, \chi) - \delta_{[\chi = \chi_0]} x}{x^2} dx$$ converges by using the Analytic Theorem in Zagier's paper on Newman's short proof of the prime number theorem. [The innovation here is that no analytic information about $(2)$ aside from what we showed above is required; the cost is that we only get leading-order growth, with no error term.] This implies that $\vartheta(X, \chi_0) \sim X$. (For nontrivial characters, the same argument gives the bound $\vartheta(X, \chi) = o(X)$.)
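As a numerical sanity check on these two behaviors of $\vartheta(X, \chi)$, here is a sketch with the illustrative (assumed) parameters $k = 4$ and $X = 10^6$: the trivial-character sum $\vartheta(X, \chi_0)$ is close to $X$, while the nontrivial-character sum is far smaller than $X$.

```python
from math import log

def sieve(N):
    """Primes up to N by the sieve of Eratosthenes."""
    is_p = [True] * (N + 1)
    is_p[0], is_p[1] = False, False
    for i in range(2, int(N ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = [False] * len(is_p[i * i :: i])
    return [n for n, b in enumerate(is_p) if b]

X = 10 ** 6
PRIMES = sieve(X)

def theta(chi):
    """vartheta(X, chi) = sum_{p <= X} chi(p) * log(p)."""
    return sum(chi(p) * log(p) for p in PRIMES)

chi0 = lambda n: 1 if n % 2 else 0            # trivial character mod 4
chi1 = lambda n: {1: 1, 3: -1}.get(n % 4, 0)  # nontrivial character mod 4
```

Here `theta(chi0) / X` comes out near $1$, while `theta(chi1)` exhibits the cancellation that makes the nontrivial characters negligible at leading order.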

To deduce Dirichlet's theorem from this, we use $$ \frac{1}{\varphi(k)} \big[ X + o(X) \big] = \frac{1}{\varphi(k)} \sum_{\chi} \overline{\chi(h)} \vartheta(X, \chi) = \sum_{\substack{p \leq X \\ p \equiv h \bmod k}} \log p,$$ which again follows from the same character orthogonality relation and the bounds for $\vartheta(X, \chi)$ from just above. Then on the one hand, we get the trivial upper bound $$ \sum_{\substack{p \leq X \\ p \equiv h \bmod k}} \log p \leq \sum_{\substack{p \leq X \\ p \equiv h \bmod k}} \log X = \pi(X; h, k) \log X \implies \pi(X; h, k) \geq \frac{X}{\varphi(k) \log X} + o\Big(\frac{X}{\log X}\Big).$$ On the other hand, for any $\epsilon > 0$, $$ \sum_{\substack{p \leq X \\ p \equiv h \bmod k}} \log p \geq \sum_{\substack{X^{1 - \epsilon} \leq p \leq X \\ p \equiv h \bmod k}} (1 - \epsilon) \log X = (1 - \epsilon) \log X \Big[ \pi(X; h, k) + O(X^{1 - \epsilon}) \Big]. $$ This implies that $$ (1 - \epsilon) \pi(X; h, k) + O_\epsilon(X^{1 - \epsilon}) \leq \frac{X}{\varphi(k) \log X} + o\Big(\frac{X}{\log X}\Big) $$ for all $\epsilon > 0$. Together, these sandwich the count and show $$ \pi(X; h, k) \sim \frac{1}{\varphi(k)} \frac{X}{\log X}.$$
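Finally, the sandwiched asymptotic can be eyeballed numerically. In this sketch (again with the illustrative choices $k = 4$ and $X = 10^6$), the two residue classes each hold close to half the primes, in rough agreement with $X/(\varphi(4)\log X)$; the agreement is only loose, since $X/\log X$ noticeably undercounts $\pi(X)$ at this height.

```python
from math import log

def sieve(N):
    """Primes up to N by the sieve of Eratosthenes."""
    is_p = [True] * (N + 1)
    is_p[0], is_p[1] = False, False
    for i in range(2, int(N ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = [False] * len(is_p[i * i :: i])
    return [n for n, b in enumerate(is_p) if b]

X = 10 ** 6
PRIMES = sieve(X)

def pi_count(h, k):
    """pi(X; h, k): the number of primes p <= X with p = h (mod k)."""
    return sum(1 for p in PRIMES if p % k == h % k)

main_term = X / (2 * log(X))  # X / (phi(4) * log X)
```

Both `pi_count(1, 4)` and `pi_count(3, 4)` land within a few percent of each other, as the equidistribution statement predicts.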