Let $(X_{i})_{i=1}^{\infty}$ be a sequence of independent, identically distributed (i.i.d.) random variables, and let $\bar{X}:= \mathbb{E}(X_{i})$ denote their common (fixed) expectation.
We all know that the law of large numbers (LLN) implies that $$ \frac{1}{n}\sum_{i=1}^{n}X_{i}\xrightarrow{p}\bar{X} $$ as $n\rightarrow\infty$. However, a similar quantity such as $$ D := \sum_{i=1}^{n}X_{i} - n\bar{X} $$ does not converge; in fact, its typical magnitude grows without bound.
My question is related to this: how does the difference $D$ grow as $n\rightarrow\infty$? That is, can we find a function $f$ such that $D\sim f(n)$ (perhaps up to a factor of $\sigma(X_{i})$, the standard deviation of the $X_{i}$)?
In addition, I would like to know how the expected absolute deviation behaves asymptotically, i.e. how does $$ E := \mathbb{E}\left|\sum_{i=1}^{n}X_{i}-n\bar{X}\right| $$ behave as $n\rightarrow \infty$? Thanks in advance. It would also be very helpful if someone could work out the specific example where $X_{i}\sim \text{Uniform}[0,1]$.
Edit: This originates from an interview question, in which I was asked to estimate the quantity $E$ above, where each $X_{i}$ is the value of a randomly drawn card from a deck of 52 cards (Ace = 1; J, Q, K = 11, 12, 13 respectively) and $n=3$. I was not able to estimate it on the spot, so I formulated this question.
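For the card example, the quantity can be computed exactly by brute force. A minimal sketch, assuming the three cards are drawn with replacement (so the draws are i.i.d.) and ignoring suits, since all four suits carry the same values:

```python
# Exact computation of E = E|X1 + X2 + X3 - 3*E[X]| for the card example,
# assuming draws WITH replacement, so each value 1..13 is equally likely.
from itertools import product

values = range(1, 14)      # Ace = 1, ..., J = 11, Q = 12, K = 13
mean = sum(values) / 13    # E[X] = 7, so n*E[X] = 21 for n = 3

# Average |sum - 21| over all 13^3 equally likely ordered triples.
E = sum(abs(a + b + c - 3 * mean)
        for a, b, c in product(values, repeat=3)) / 13**3
print(E)  # close to the CLT estimate sqrt(2/pi) * sqrt(3 * 14) ≈ 5.17
```

Here $\mathrm{Var}(X_1) = (13^2-1)/12 = 14$, so the normal approximation discussed in the answer below predicts roughly $\sqrt{2\cdot 42/\pi}\approx 5.2$.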
Without loss of generality, let $X_1, X_2, \dots$ be i.i.d. with mean $E X = 0$ and finite variance $E X^2 <\infty$. Then $S_n := X_1+\dots+X_n$ has variance $nE X^2$, and the central limit theorem says $S_n/\sqrt{\mathrm{Var}\,S_n}\Rightarrow \mathcal N(0,1)$. So as $n\to\infty$, $S_n \approx \mathcal N(0,nEX^2)$; in particular, $S_n$ is typically of order $\sqrt{nEX^2}$, which answers your first question. Under the additional assumption of a finite third moment, the error of this normal approximation can be bounded using the Berry–Esseen theorem.
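As a quick sanity check of the CLT statement above, here is a minimal Monte Carlo sketch using centred $U([0,1])$ variables (the sample size and trial count are arbitrary choices of mine):

```python
# Monte Carlo check that S_n / sqrt(Var S_n) is approximately N(0,1)
# for X_i = U_i - 1/2 with U_i ~ Uniform[0,1], so EX = 0, EX^2 = 1/12.
import random

random.seed(0)
n, trials = 200, 5000
var = 1 / 12  # E X^2 for X uniform on [-1/2, 1/2]

# Standardize each S_n by sqrt(Var S_n) = sqrt(n * E X^2).
z = [sum(random.random() - 0.5 for _ in range(n)) / (n * var) ** 0.5
     for _ in range(trials)]

# Empirical mean and variance of the standardized sums should be
# close to 0 and 1, respectively.
m = sum(z) / trials
v = sum((x - m) ** 2 for x in z) / trials
print(m, v)
```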
As for $E\lvert S_n \rvert$, you can make the same approximation using the central limit theorem: since $E\lvert Z\rvert = \sqrt{2/\pi}$ for $Z\sim\mathcal N(0,1)$, one gets $E\lvert S_n\rvert \sim \sqrt{2nEX^2/\pi}$. Another approach (although it only gives an upper bound) is to use concentration inequalities, if you have additional information on $X_1$. In particular, if $X_1\sim U([0,1])$, then it is natural to use Hoeffding's inequality: for every $t>0$ we have $$P(\lvert S_n - E S_n \rvert\geq t)\leq 2\exp(-2n^{-1}t^2).$$ Now recall that for a non-negative random variable $Y\geq 0$, $$ E Y = \int_0^{\infty}P(Y\geq t)\,dt,$$ so $$E\lvert S_n - ES_n\rvert \leq \int_0^{\infty}2\exp(-2n^{-1}t^2)\,dt = \sqrt{\pi n/2},$$ which shows $E\lvert S_n - ES_n\rvert = O(\sqrt{n})$, consistent with the CLT rate (though with a worse constant).
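A small sketch comparing the two estimates for $X_i\sim U([0,1])$: the Monte Carlo value of $E\lvert S_n - E S_n\rvert$, the CLT prediction $\sqrt{2/\pi}\cdot\sqrt{n/12}$, and the integrated Hoeffding bound $\sqrt{\pi n/2}$ (the helper name `mad` and the trial counts are my own choices):

```python
# Compare E|S_n - E S_n| for uniform X_i against the CLT prediction
# and the integrated Hoeffding bound, for several values of n.
import math
import random

random.seed(1)

def mad(n, trials=2000):
    """Monte Carlo estimate of E|S_n - n/2| for X_i ~ Uniform[0,1]."""
    return sum(abs(sum(random.random() for _ in range(n)) - n / 2)
               for _ in range(trials)) / trials

for n in (100, 400, 1600):
    est = mad(n)
    clt = math.sqrt(2 / math.pi) * math.sqrt(n / 12)  # CLT prediction
    bound = math.sqrt(math.pi * n / 2)                # Hoeffding bound
    print(n, round(est, 2), round(clt, 2), round(bound, 2))
```

All three columns grow like $\sqrt{n}$; the Hoeffding bound is valid but off by a constant factor of roughly $\sqrt{6\pi}\approx 4.3$ from the true asymptotics.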