I've heard many times that the distribution of the non-trivial zeros of the Riemann zeta function is hypothesized to match that of the eigenvalues of a random Hermitian matrix (see Wikipedia or this blog post by Terence Tao).
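To fix intuition about what this hypothesis asserts, here is a minimal sketch of my own (not from the linked posts): sample a GUE matrix with numpy, normalize the bulk eigenvalue spacings to mean 1, and compare their histogram with the Wigner surmise, which closely approximates the GUE nearest-neighbor spacing law; the conjecture is that suitably rescaled gaps between high zeros of $\zeta$ follow the same statistics. The matrix size and binning below are arbitrary illustrative choices.

```python
# Minimal sketch: GUE eigenvalue spacings versus the Wigner surmise
# p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi). Matrix size N and the binning
# are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# GUE matrix: Hermitian part of an i.i.d. complex Gaussian matrix.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2
eigs = np.linalg.eigvalsh(H)          # returned in ascending order

# Crude "unfolding": keep the middle half of the spectrum, where the
# semicircle density is roughly flat, and rescale to unit mean spacing.
bulk = eigs[N // 4 : 3 * N // 4]
spacings = np.diff(bulk)
spacings /= spacings.mean()

hist, edges = np.histogram(spacings, bins=15, range=(0.0, 3.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
surmise = (32 / np.pi**2) * centers**2 * np.exp(-4 * centers**2 / np.pi)
for c, h, w in zip(centers, hist, surmise):
    print(f"s = {c:.2f}   empirical = {h:.3f}   Wigner surmise = {w:.3f}")
```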
However, the following detail has always bothered me: there are many variants of Riemann's explicit formula relating the zeta function to the distribution of primes, such as
$$\sum_{m \geq 1,~p^m\leq x} \log p = x - \sum_\rho \frac{x^\rho}{\rho} - \log 2\pi - \tfrac{1}{2}\log\!\left(1 - x^{-2}\right)$$
Such formulas typically pair a manifestly piecewise constant function on the left-hand side with a sum over the non-trivial zeros $\rho$ of the zeta function on the right (the sum is only conditionally convergent, and is understood as the limit of symmetric partial sums over $|\operatorname{Im}\rho| \leq T$). Just glancing at the right-hand side, however, it is not at all obvious that it should be piecewise constant, other than the fact that it equals the LHS. This leads to the following related questions (a numerical sketch of the puzzle follows them):
Are there simple constraints on the non-trivial zeros of $\zeta(s)$ that make the right-hand side manifestly piecewise constant?
How does this constraint bear on the conjectured connection to eigenvalues of random matrices? To be slightly more specific: when Terence Tao summarizes this conjecture, he states that "we have the [above] hypothesis which appears to accurately model the zeta function, but does not capture such basic properties of the primes as the fact that the primes are all natural numbers." But doesn't the hypothesis fail to capture the far more basic fact that counting functions must be piecewise constant? The quote makes it sound as though piecewise constancy itself follows from something trivial, and that only the location of the discontinuities at integer values is a non-trivial fact that does not follow easily from the hypothesis.
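For concreteness, here is a minimal numerical sketch of the puzzle (my own, using mpmath; the truncation level $N$ is an arbitrary choice): the left-hand side of the explicit formula computed directly from prime powers, against the right-hand side truncated at the first $N$ conjugate pairs of zeros. The truncated sum is a smooth function that merely oscillates around the step function; the jumps only emerge in the limit.

```python
# Minimal sketch: LHS of the explicit formula computed directly from prime
# powers, versus the RHS truncated at the first N conjugate pairs of
# non-trivial zeros (fetched with mpmath.zetazero). N is an arbitrary
# illustrative choice; the truncated sum is smooth, not piecewise constant.
from mpmath import mp, mpf, log, zetazero

mp.dps = 15

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def psi(x):
    """Chebyshev function psi(x): sum of log p over prime powers p^m <= x."""
    total = mpf(0)
    for n in range(2, int(x) + 1):
        p = smallest_prime_factor(n)
        q = n
        while q % p == 0:
            q //= p
        if q == 1:                     # n = p^m, a prime power
            total += log(p)
    return total

def psi_truncated(x, N=50):
    """x - sum over first N zero pairs of x^rho/rho - log(2 pi) - (1/2) log(1 - x^-2)."""
    x = mpf(x)
    s = x - log(2 * mp.pi) - log(1 - x**-2) / 2
    for n in range(1, N + 1):
        rho = zetazero(n)              # n-th zero in the upper half-plane
        s -= 2 * (x**rho / rho).real   # rho and its conjugate together
    return s

for x in (10.5, 20.5, 30.5):
    print(x, float(psi(x)), float(psi_truncated(x)))
```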
I posted this question in the comment section of Terence Tao's blog, and he responded. I'm posting his answer here for convenience:
"In fact none of the plausible models for the distribution for the zeta functions are able to detect the piecewise constancy of the prime counting function. This can be explained in terms of the uncertainty principle (see e.g., the discussion at the end of Section 2 of this blog post of mine): when trying to use the zeta function $\zeta(1/2+it)$ to understand sums like $\sum_{n \leq x} \Lambda(n)$, the uncertainty principle tells us that
$$\Delta t \times \Delta \log x \gg 1$$
or by the chain rule [using $\Delta \log x \approx \Delta x / x$]
$$\Delta t \times \Delta x \gg x$$
To see the piecewise constancy of the prime counting function, one has to resolve the spatial uncertainty down to unit scales, so we need $\Delta x \ll 1$, which by the uncertainty principle forces $\Delta t \gg x$. That is to say, we need information on the zeroes on an interval of length $\gg x$ before we could even hope to detect this piecewise constancy. On the other hand, GUE and related models only cover intervals in $t$-space of length $O(1)$ (or even $O(1/\log T)$) at best, and so have nowhere near the resolution to see these effects; as far as the GUE model (or any other local model for the zeroes) is concerned, the integers and primes may well be continuously distributed.
It would be a major breakthrough if there was some new way to exploit the discrete nature of the integers and primes that would be visible on the zeta function side, thus circumventing this uncertainty principle barrier. The most obvious instance of this is the functional equation, which ultimately derives from the Poisson summation formula applied to the integers which one can view as a discrete subgroup of the reals. Some of the more advanced bounds on exponential sums related to the zeta function also rely more heavily on the arithmetic structure of the integers, but again not at anywhere near the resolution needed to say much about zeroes on the critical line (though they can help for instance with zero free regions and some zero density estimates, as well as subconvexity bounds)."
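To see the quoted resolution barrier numerically, one can scan the same truncated sum across a single jump of $\psi$ (here at the prime 29) for two truncation heights; this is my own rough illustration of the point, not anything from the comment. With zeros up to height $T$, the jump of size $\log 29 \approx 3.37$ gets smeared over a window of width roughly $x/T$, exactly as the uncertainty principle $\Delta t \times \Delta x \gg x$ predicts.

```python
# Rough sketch of the resolution barrier: scan the truncated explicit
# formula across the jump of psi at the prime 29. With zeros up to height T,
# the jump is smeared over a window of width roughly x / T.
from mpmath import mp, mpf, log, zetazero

mp.dps = 15

def psi_truncated(x, zeros):
    """Explicit formula truncated to the given list of upper-half-plane zeros."""
    x = mpf(x)
    s = x - log(2 * mp.pi) - log(1 - x**-2) / 2
    for rho in zeros:
        s -= 2 * (x**rho / rho).real   # rho together with its conjugate
    return s

for N in (20, 100):
    zeros = [zetazero(n) for n in range(1, N + 1)]  # computed once per level
    T = float(zeros[-1].imag)                        # effective truncation height
    print(f"N = {N} zero pairs, T ~ {T:.1f}, expected smoothing scale ~ {29 / T:.2f}")
    for x in (28.0, 28.5, 28.9, 29.1, 29.5, 30.0):
        print(f"  x = {x:4.1f}   psi_N(x) = {float(psi_truncated(x, zeros)):.3f}")
```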