I am reading the textbook *Probability: Theory and Examples*, 5th edition, by Durrett. Here is Theorem 2.2.6 from the textbook.
Let $\mu_n = E(S_n)$, $\sigma_n^2 = \operatorname{var}(S_n)$. If $\sigma_n^2/b_n^2 \to 0$, then
$\frac{S_n-\mu_n}{b_n} \to 0$ in probability.
The textbook notes that $S_n$ can be any sequence of random variables; there are no independence or identical-distribution assumptions. Applying this theorem with $S_n=\sum_{i=1}^n X_i$ and $b_n=n$, I would like to make the claim below.
Let $X_1, X_2, \ldots, X_n$ be random variables, $S_n=\sum_{i=1}^n X_i$, and $\sigma_n^2=\operatorname{var}(S_n)$. If $\sigma_n^2/n^2 \to 0$, then
$\frac{S_n - E(S_n)}{n} \to 0$ in probability, i.e. $\frac{S_n}{n} - \frac{E(S_n)}{n} \to 0$ in probability.
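As a quick sanity check of the claim (not a proof), here is a small simulation sketch. It uses independent but *not* identically distributed $X_i \sim N(i, 1)$, a choice of mine for illustration, so that $\operatorname{var}(S_n) = n$ and $\sigma_n^2/n^2 = 1/n \to 0$, and the hypothesis of the theorem holds while the i.i.d. assumption fails.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_deviation(n, trials=2000):
    """Estimate E|S_n/n - E(S_n)/n| over many Monte Carlo trials,
    with independent X_i ~ Normal(mean=i, var=1) (non-identical means)."""
    means = np.arange(1, n + 1)                        # E(X_i) = i
    X = rng.normal(loc=means, scale=1.0, size=(trials, n))
    S = X.sum(axis=1)                                  # S_n for each trial
    ES = means.sum()                                   # E(S_n) = n(n+1)/2
    return np.abs((S - ES) / n).mean()

for n in [10, 100, 1000]:
    # (S_n - E S_n)/n ~ Normal(0, 1/n), so this shrinks like 1/sqrt(n)
    print(n, mean_abs_deviation(n))
```

The printed deviations shrink as $n$ grows, consistent with $\frac{S_n - E(S_n)}{n} \to 0$ in probability.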
Is my statement correct?