I apologize if this question is overly vague, but part of the reason I am asking is because I don't know a more precise way of discussing these ideas.
To state a general question:
What, if any, connection is there between $O(N^{1/2})$ (or related ideas if big $O$ is not appropriate) and randomness?
Here are a few things which lead me to this vague (sorry!) question:
I was recently talking with an analytic number theorist and he stated something along the lines of
"Such and such a sequence $\{a_n\}$ has partial sums $\displaystyle\sum_{n<N}a_n$ with asymptotic behavior $O(N^{1/2})$, so the sequence is essentially random in its distribution."
I would like to understand more what he is referring to.
EDIT: I understand that $O(\sqrt N)$ can occur for very deterministic sequences. That was never in question, so I apologize if my quote was misleading. However, I would like to understand why he would make such a comment (admittedly imprecise, and incorrect in general).
In other contexts, I have seen $N^{1/2}$ related to random walks and the term "root-mean-squared", but these are unknown waters for me.
There is also the appearance of $O(x^{1/2}\log(x))$ in the Riemann hypothesis as the best error bound for the prime number theorem.
Is this somehow related to the "randomness" that $N^{1/2}$ suggests in other contexts, with the $\log(x)$ factor accounting for the non-random fact that the primes become sparser for $x \gg 0$?
Any information about these specific things would be greatly appreciated.
Suppose $X_1, X_2, \dots X_n$ is a sequence of independent and identically distributed random variables with mean $0$ and standard deviation $\sigma$. Then the sum $S_n = X_1 + \dots + X_n$ has mean $0$ and standard deviation $\sqrt{n} \sigma$.
There is a large class of results in probability, the simplest being Chebyshev's inequality and the most famous being the central limit theorem, guaranteeing that random variables are not too far from their expected values, where "too far" is measured in terms of the standard deviation. So what the above tells you is that while you might have expected the sum above to vary as much as $O(n)$ from its mean (which occurs if the $X_i$ are not independent but are instead all equal, say), in fact independence implies that it only varies about $O(\sqrt{n})$ from its mean. This is a fundamental observation in probability and statistics.
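As a quick sanity check (a minimal simulation sketch, not part of the argument above), one can take $X_i = \pm 1$ with equal probability, so $\mu = 0$ and $\sigma = 1$, and compare the empirical standard deviation of $S_n$ over many trials with $\sqrt{n}\,\sigma = \sqrt{n}$:

```python
import math
import random

def walk_std(n, trials=5000, seed=0):
    """Empirical standard deviation of S_n = X_1 + ... + X_n
    for i.i.d. steps X_i = +-1 (mean 0, sigma = 1)."""
    rng = random.Random(seed)
    sums = [sum(rng.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]
    mean = sum(sums) / trials
    var = sum((s - mean) ** 2 for s in sums) / trials
    return math.sqrt(var)

for n in (100, 400):
    # The empirical spread should track sqrt(n), not n.
    print(n, round(walk_std(n), 2), math.sqrt(n))
```

The empirical values hug $\sqrt{n}$ rather than $n$, which is exactly the $O(\sqrt{n})$ fluctuation scale the independence argument predicts.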