In his excellent book *How Not to Be Wrong: The Power of Mathematical Thinking*, Jordan Ellenberg makes the following statement on page 142:
There are some deviations from pure randomness whose small effects number theorists know how to handle.
He then gives the example of G. H. Hardy and J. E. Littlewood's analysis of why the number of twin primes up to $N$ should in fact be about $32\%$ greater than $N/(\log N)^2$.
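For reference, the prediction in question appears to be the first Hardy–Littlewood conjecture (this is the standard statement, paraphrased rather than Ellenberg's exact formula): the number $\pi_2(N)$ of twin prime pairs up to $N$ should satisfy
$$\pi_2(N) \sim 2C_2 \,\frac{N}{(\log N)^2}, \qquad 2C_2 = 2\prod_{p > 2}\left(1 - \frac{1}{(p-1)^2}\right) \approx 1.32,$$
which is where the roughly $32\%$ correction comes from.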
I would be very interested in learning more about this. Are there other examples, besides the twin prime problem, where deviations from pure randomness are well understood within existing number theory?
Is there a general theorem applicable to these types of analyses?