Consider a sequence of random variables $X_1,X_2,\ldots,X_n$. No assumptions about independence are made; only the joint probability density function $f(x_1,\ldots,x_n)$ is known. Markov's theorem then states that for any constant $\epsilon>0$: $$\lim_{n\rightarrow \infty} \mathbb{P}\left\{ \left | \frac{1}{n}\sum_{k=1}^{n}X_k-\frac{1}{n}\sum_{k=1}^{n}\textbf{E}X_k \right |<\epsilon \right\}=1$$ provided that $$\frac{1}{n^2}Var\left ( \sum_{k=1}^n X_k \right )\rightarrow 0.$$ This is a very general formulation of the law of large numbers, and this formulation with an arbitrary dependence structure can be found in other books as well. I am interested in the classes of distributions, or model structures, for which the condition $\frac{1}{n^2}Var\left ( \sum_{k=1}^n X_k \right )\rightarrow 0$ does *not* hold. For example, does this condition fail for heavy-tailed distributions? Does the distribution have to decay at a certain rate at infinity? Or can it perhaps not have strong correlations between too many variables? etc.
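One concrete way the condition can fail without any heavy tails (a simulation sketch, not from any source; the model $X_k = Z + \varepsilon_k$ with a shared Gaussian component $Z$ is my own illustrative choice): here $Var\left(\sum_{k=1}^n X_k\right) = n^2\,Var(Z) + n\,Var(\varepsilon)$, so $\frac{1}{n^2}Var\left(\sum X_k\right) \rightarrow Var(Z) \neq 0$ and Markov's condition is violated even though every $X_k$ is Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_variance(n, trials=20000, var_z=1.0, var_eps=1.0):
    """Monte Carlo estimate of (1/n^2) Var(sum of X_k) for X_k = Z + eps_k."""
    z = rng.normal(0.0, np.sqrt(var_z), size=trials)           # shared component Z
    eps = rng.normal(0.0, np.sqrt(var_eps), size=(trials, n))  # idiosyncratic noise
    sums = n * z + eps.sum(axis=1)                             # sum X_1 + ... + X_n
    return sums.var() / n**2

# The scaled variance stays near Var(Z) = 1 instead of vanishing:
for n in (10, 100, 1000):
    print(n, scaled_variance(n))
```

This shows the failure is driven by the dependence structure (one common factor correlating all variables), not by tail behavior.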
I could not find any source on this matter. Any information on this issue would be highly appreciated.
UPDATE
The variance of the sum, $Var\left ( \sum_{k=1}^n X_k \right )$, is in fact simply the sum of all entries of the covariance matrix of the random variables $X_1,\ldots,X_n$. Denote this covariance matrix by $C$, and denote by $e$ the vector with all entries equal to $1$. Then $$Var\left ( \sum_{k=1}^n X_k \right )=e^T C e.$$ We also have the following bound for quadratic forms with a symmetric matrix $A$: $$\lambda_{min}\, x^T x\leq x^T A x\leq \lambda_{max}\,x^Tx.$$ Since $e^T e = n$, in our notation this gives $$n\lambda_{min}\leq Var\left ( \sum_{k=1}^n X_k \right )\leq n\lambda_{max},$$ and, because $C$ is positive semidefinite (so $\lambda_{min}\geq 0$), dividing by $n^2$ and taking the limit: $$0\leq \lim_{n\rightarrow\infty} \frac{1}{n^2}Var\left ( \sum_{k=1}^n X_k \right )\leq \lim_{n\rightarrow\infty} \frac{\lambda_{max}}{n}$$
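The identities above can be sanity-checked numerically (a quick sketch; the random PSD matrix $C = AA^T$ is just a stand-in for an arbitrary covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5
A = rng.normal(size=(n, n))
C = A @ A.T                      # a random symmetric PSD "covariance" matrix
e = np.ones(n)

var_sum = e @ C @ e              # e^T C e
eigvals = np.linalg.eigvalsh(C)  # sorted eigenvalues of the symmetric matrix C
lam_min, lam_max = eigvals[0], eigvals[-1]

# e^T C e equals the sum of all entries of C ...
assert np.isclose(var_sum, C.sum())
# ... and is sandwiched between n*lambda_min and n*lambda_max:
assert n * lam_min - 1e-9 <= var_sum <= n * lam_max + 1e-9
```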
So the question boils down to whether the maximal eigenvalue of the covariance matrix grows more slowly than $n$ as the dimension grows (boundedness of $\lambda_{max}$ is sufficient). Any suggestions on how to attack this problem are welcome.
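A standard example where $\lambda_{max}$ does grow like $n$ is the equicorrelated covariance matrix $C = (1-\rho)I + \rho\, ee^T$ (unit variances, common correlation $\rho>0$): its top eigenvalue is $1+(n-1)\rho$, so $\lambda_{max}/n \rightarrow \rho > 0$ and the upper bound above does not vanish. A quick numerical check (illustrative sketch):

```python
import numpy as np

def top_eigenvalue(n, rho=0.5):
    """Largest eigenvalue of the equicorrelated matrix (1-rho) I + rho * J."""
    C = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
    return np.linalg.eigvalsh(C)[-1]

# Matches the closed form 1 + (n-1)*rho, which grows linearly in n:
for n in (10, 100, 1000):
    print(n, top_eigenvalue(n), 1 + (n - 1) * 0.5)
```

This matches the shared-component intuition: a single factor correlating all $n$ variables concentrates order-$n$ variance in one eigendirection.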