If $X$ is a random variable and $X_i$ is the $i$th result of an experiment whose underlying probability distribution is that of $X$, then by the law of large numbers
$$\lim_{n \rightarrow \infty} \sum_{i=1}^n \frac{X_i}{n} = \overline{X}.$$
I would therefore expect
$$\sum_{i=1}^n X_i - n \times \overline{X} \rightarrow 0.$$
If I attempt to do this numerically, this is not what I find. In fact, it does not seem that the sum converges at all. Why does this sum not converge?
The code I've used to test this is below (in Python):
from random import randint

total = 0  # avoid shadowing the built-in sum()
for i in range(1, 1000):
    total = total + randint(1, 100)
    print("{}: {}".format(i, total - i * 50.5))  # 50.5 = E[randint(1, 100)]
There is no reason to expect that $$\sum_{i=1}^n X_i - n \overline X \to 0.$$ Consider that, say, $$\frac{n+1}{n} \to 1$$ as $n \to \infty$, but $$(n + 1) - n \cdot 1 = 1 \not\to 0.$$ These are nonrandom sequences, but I think they illustrate the conceptual mistake.
To go into a little more detail, let $X_1, X_2, \dotsc$ be i.i.d. random variables with mean $\mu$ (I prefer this over $\overline X$ to avoid confusion with the sample mean) and standard deviation $\sigma$. Since the $X_i$ are independent, their variances add, so $$\text{Var} \left( \sum_{i=1}^n X_i - n \mu \right) = \sum_{i=1}^n \text{Var}(X_i) = n \sigma^2,$$ and the standard deviation of $\sum_{i=1}^n X_i - n \mu$ is $\sqrt n \sigma$. This tells you that the typical size of $\sum_{i=1}^n X_i - n \mu$ grows with $n$, which is exactly the non-convergence you observed numerically. On the other hand, if we divide everything by $n$, we get something with standard deviation $\sigma / \sqrt n$, i.e. something whose standard deviation decreases with $n$; that rescaled quantity, $\frac1n \sum_{i=1}^n X_i - \mu$, is what the law of large numbers actually says tends to $0$.
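As a quick sanity check of the $\sqrt n \sigma$ scaling, here is a sketch using the same setup as the question: draws from randint(1, 100), which has $\mu = 50.5$ and $\sigma^2 = (100^2 - 1)/12 = 833.25$. It estimates the standard deviation of $\sum_{i=1}^n X_i - n\mu$ over many trials and compares it with $\sqrt n \sigma$ (the function names and trial count are just illustrative choices):

```python
from random import randint
from statistics import pstdev

MU = 50.5                     # E[randint(1, 100)]
SIGMA = (9999 / 12) ** 0.5    # sd of uniform{1, ..., 100}, about 28.87

def centered_sum(n):
    """One realization of sum_{i=1}^n X_i - n*mu."""
    return sum(randint(1, 100) for _ in range(n)) - n * MU

def sd_of_centered_sum(n, trials=2000):
    """Monte Carlo estimate of the sd of the centered sum."""
    return pstdev(centered_sum(n) for _ in range(trials))

for n in (100, 400, 1600):
    print(n, round(sd_of_centered_sum(n), 1), round(SIGMA * n ** 0.5, 1))
```

Quadrupling $n$ should roughly double the observed standard deviation, matching $\sqrt n \sigma$; dividing each centered sum by $n$ instead would show the standard deviation shrinking like $\sigma / \sqrt n$.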