For random variables $X_1,X_2,\ldots, X_n$, the law of large numbers says that the average of the results obtained from a large number of trials should be close to the expected value, and tends to become closer to the expected value as more trials are performed, i.e.,
$$\lim_{n\rightarrow\infty}\sum_{i=1}^n\frac{X_i}{n}=\bar{X},\tag{1}$$
where $\bar{X}$ is the expected value.
But the same doesn't hold for the following:
$$\sum_{i=1}^nX_i-n\bar{X}.\tag{2}$$
In fact, $(2)$ tends to increase in absolute value as $n$ increases.
I find this very surprising and hard to understand, because I expected that if the average tends to the expected value, then the sum should tend to $n\times$ the expected value.
Let $Y=\sum_{i=1}^nX_i-n\bar{X}$, then
$$\frac{Y}{n}=\frac{\sum_{i=1}^nX_i}{n}-\bar{X}\tag{3}.$$
The way I try to convince myself is as follows: when $n\rightarrow\infty$, the LHS of $(3)$ tends to zero, thus $(1)$ will hold. But $(2)$ will not converge to zero, because the fluctuations around the expected value accumulate as $n$ grows, forming a divergent series.
Any help on obtaining a better intuition behind the differences between $(1)$ and $(2)$ is appreciated.
I take it we are assuming the $X_i$ are i.i.d. ($X_i \sim \mathbb{P}_X$) with mean $\mu_X$.
To be a bit more precise on (1):
$$\lim_{n \to \infty} \frac1n \sum_1^n X_i = \mu_X\;\;a.s.$$
So the sample mean converges to the mean almost surely (or "with probability 1").
The key here is that dividing by $n$ forces the variance of the sample mean to go to $0$.
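Concretely, using independence (a short derivation, assuming the $X_i$ share a common finite variance $\sigma^2$):
$$V\!\left[\frac1n\sum_1^n X_i\right] = \frac{1}{n^2}\sum_1^n V[X_i] = \frac{\sigma^2}{n} \xrightarrow{n\to\infty} 0.$$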
In contrast for $Y_n := \sum_1^n X_i - n\mu_X$ we get an unbounded variance:
$$V[Y_n] = \sum_1^n V[X_i] \xrightarrow{n\to \infty} \infty$$
While the mean stays at $0$:
$$E[Y_n] = \sum_1^n \mu_X - n\mu_X = 0$$
So we have an increasingly variable process with stationary mean.
Therefore, the larger the value of $n$, the less likely $Y_n$ is to be near $0$, so we cannot have $Y_n \to 0$ w.p. $1$ (since $X_n \xrightarrow{a.s.} X \implies X_n \xrightarrow{p} X$, failure to converge in probability rules out almost-sure convergence).
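You can see both behaviors in a quick simulation sketch. Here I assume i.i.d. fair coin flips, i.e. $X_i \sim \mathrm{Bernoulli}(1/2)$ with $\mu_X = 1/2$ (my choice of distribution, not anything special): the sample-mean error $|\bar X_n - \mu_X|$ shrinks, while the centered sum $Y_n = \sum_1^n X_i - n\mu_X$ typically grows in absolute value on the order of $\sqrt{n}$.

```python
import random

random.seed(0)

def simulate(n):
    """Flip n fair coins (X_i in {0,1}, mu = 0.5).

    Returns (|sample mean - mu|, Y_n) where Y_n = sum(X_i) - n*mu.
    """
    total = sum(random.random() < 0.5 for _ in range(n))
    return abs(total / n - 0.5), total - n * 0.5

for n in (100, 10_000, 1_000_000):
    err, y = simulate(n)
    # err keeps shrinking, while |Y_n| tends to grow like sqrt(n)
    print(f"n = {n:>9}: |mean - mu| = {err:.5f}, Y_n = {y:+.1f}")
```

Typical runs show the first column heading toward $0$ and the second column wandering away from $0$, which is exactly the contrast between $(1)$ and $(2)$.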