I know that for iid variables the Law of Large Numbers works in the sense that $E\left(\sum_{i=1}^n \frac{X_i}{n}\right) = \mu$.
But does this happen if $X_{1}$, ..., $X_{n}$ are independent but not identically distributed? And what about the variance of $\bar{X}$ in this case?
$$ \def\eqdef{\stackrel{\text{def}}{=}}\\ \def\cov{\text{Cov}}\\ \def\var{\text{Var}} $$ The fact that $\ E\left(\sum\limits_{i=1}^n\frac{X_i}{n}\right)=\mu\ $, whenever $\ \mu=E\big(X_i\big)\ $ is the common mean of the random variables $\ X_i\ $, follows from the linearity of expectation. It is not generally referred to as a "law of large numbers" and doesn't require the $\ X_i\ $ to be independent or identically distributed.
In general, if $\ X_1,X_2,\dots,X_n\ $ are any random variables with finite means $\ \mu_i=E\big(X_i\big)\ $, it is true that $$ E\left(\sum\limits_{i=1}^n\frac{X_i}{n}\right)=\sum_{i=1}^n\frac{E\big(X_i\big)}{n}=\frac{1}{n}\sum_{i=1}^n\mu_i\ , $$ and when $\ \mu_i=\mu_j=\mu\ $ for all $\ i\ $ and $\ j\ $ the rightmost of the above sums simplifies to $\ \frac{1}{n}\sum\limits_{i=1}^n\mu_i=\mu\ $. All this requires is that all the $\ X_i\ $ have the same finite mean $\ \mu\ $. They don't need to be identically distributed, nor independent.
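As a quick numerical sanity check, here is a minimal simulation sketch (assuming Python with NumPy; the three distributions are arbitrary illustrative choices): three independent, non-identically distributed variables with common mean $\ \mu=5\ $, whose average has expectation $\ \mu\ $.

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 100_000  # number of simulated replications

# X1 ~ Normal(5, 1), X2 ~ Exponential(mean 5), X3 ~ Uniform(0, 10):
# three different distributions, all with mean mu = 5.
x1 = rng.normal(loc=5.0, scale=1.0, size=reps)
x2 = rng.exponential(scale=5.0, size=reps)
x3 = rng.uniform(low=0.0, high=10.0, size=reps)

xbar = (x1 + x2 + x3) / 3   # Xbar for each replication
print(xbar.mean())          # ≈ 5.0, as linearity of expectation predicts
```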
The variance of $\ \overline{X}\eqdef\sum\limits_{i=1}^n\frac{X_i}{n}\ $ is a little more complicated in the general case, but it can be expressed in terms of the covariance matrix of $\ X_1,X_2,\dots,X_n\ $, without needing to assume that the $\ X_i\ $ are independent or identically distributed.
If $\ \sigma_{ij}\eqdef E\Big(\big(X_i-E(X_i)\big)\big(X_j-E(X_j)\big)\Big)\ $ is the covariance of $\ X_i\ $ and $\ X_j\ $ when $\ j\ne i\ $ (and the variance of $\ X_i\ $ when $\ j=i\ $), the covariance matrix $\ \cov(X)\ $ of $\ X_1, X_2,\dots, X_n\ $ is defined by $$ \cov(X)\eqdef\begin{pmatrix}\sigma_{11}&\sigma_{12}&\dots&\sigma_{1n}\\ \sigma_{21}&\sigma_{22}&\dots&\sigma_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ \sigma_{n1}&\sigma_{n2}&\dots&\sigma_{nn}\end{pmatrix}\ . $$ The variance $\ \var\big(\overline{X}\big)\ $ of $\ \overline{X}\ $ is given by \begin{align} \var\big(\overline{X}\big)&\eqdef E\left(\left(\overline{X}-E\left(\overline{X}\right)\right)^2\right)\\ &=E\left(\left(\frac{1}{n}\sum_{i=1}^nX_i -E\left(\frac{1}{n}\sum_{i=1}^nX_i\right)\right)^2\right)\\ &=\frac{1}{n^2}E\left(\sum_{i=1}^n\big(X_i-E\big(X_i\big)\big)\sum_{j=1}^n\big(X_j-E\big(X_j\big)\big)\right)\\ &=\frac{1}{n^2}E\left(\sum_{i=1}^n\sum_{j=1}^n\big(X_i-E\big(X_i\big)\big)\big(X_j-E\big(X_j\big)\big)\right)\\ &=\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n E\Big(\big(X_i-E\big(X_i\big)\big)\big(X_j-E\big(X_j\big)\big)\Big)\\ &=\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n\sigma_{ij}\\ &=\frac{1}{n^2}\sum_{i=1}^n\left(\sigma_i^2+\sum_{\substack{j=1\\ j\ne i}}^n\sigma_{ij}\right)\ , \end{align} where $\ \sigma_i\eqdef\sqrt{\sigma_{ii}}\ $ is the standard deviation of $\ X_i\ $. This requires nothing more than the existence of the relevant variances and covariances, and the linearity of expectation. No assumption of independence or identity of distribution is necessary.
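Here is a sketch of this identity in a correlated (hence non-independent) case, again assuming Python with NumPy; the covariance matrix below is an arbitrary illustrative choice. It compares $\ \frac{1}{n^2}\sum_{i,j}\sigma_{ij}\ $ with the empirical variance of $\ \overline{X}\ $ over many replications.

```python
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[2.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 3.0]])   # Cov(X): symmetric, positive definite
mu = np.array([5.0, 5.0, 5.0])
n = len(mu)

theory = cov.sum() / n**2           # (1/n^2) * sum over all sigma_ij

samples = rng.multivariate_normal(mu, cov, size=200_000)
xbar = samples.mean(axis=1)         # Xbar for each replication
print(theory, xbar.var())           # both ≈ 0.933, up to simulation error
```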
The expression for $\ \var\big(\overline{X}\big)\ $ does not simplify much when the $\ X_i\ $ are merely identically distributed. If the random variables $\ X_1, X_2,\dots, X_n\ $ are homoscedastic—i.e. they all have the same variance, $\ \sigma^2\ $, say (for which it is sufficient, but not necessary, for them to be identically distributed)—then the expression simplifies a little to $$ \var\big(\overline{X}\big)=\frac{\sigma^2}{n}+\frac{1}{n^2}\sum_{\substack{i,j=1\\ j\ne i}}^n\sigma_{ij}\ . $$ If, in addition, the covariances are all equal, $\ \sigma_{ij}=\rho\ $, say, for all $\ j\ne i\ $, then it simplifies further to $$ \var\big(\overline{X}\big)=\frac{\sigma^2+(n-1)\rho}{n}\ . $$ For this condition to hold, it is not sufficient for the $\ X_i\ $ to be identically distributed. It is sufficient, but not necessary, for all the $2$-dimensional marginal distributions of the pairs $\ X_i,X_j\ $ to be identical.
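A sketch of this homoscedastic, equal-covariance case, with arbitrary illustrative values of $\ \sigma^2\ $ and $\ \rho\ $ (note that $\ \rho\ $ must satisfy $\ -\sigma^2/(n-1)\le\rho\le\sigma^2\ $ for the matrix to be a valid covariance matrix): the closed form agrees exactly with the general $\ \frac{1}{n^2}\sum_{i,j}\sigma_{ij}\ $ expression.

```python
import numpy as np

n, sigma2, rho = 5, 2.0, 0.6
cov = np.full((n, n), rho)       # every off-diagonal covariance equals rho
np.fill_diagonal(cov, sigma2)    # every variance equals sigma^2

general = cov.sum() / n**2                   # the general expression
closed_form = (sigma2 + (n - 1) * rho) / n   # the simplified expression
print(general, closed_form)                  # both equal 0.88
```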
If $\ X_1, X_2,\dots, X_n\ $ are pairwise uncorrelated—i.e. $\ \sigma_{ij}=0\ $ whenever $\ j\ne i\ $ (for which it is sufficient, but not necessary, for them to be independent)—then the expression for $\ \var\big(\overline{X}\big)\ $ simplifies to $$ \var\big(\overline{X}\big)=\frac{1}{n^2}\sum_{i=1}^n\sigma_i^2\ , $$ and when $\ X_1, X_2,\dots, X_n\ $ are homoscedastic, with $\ \sigma_i=\sigma\ $ for all $\ i\ $, it simplifies further to $$ \var\big(\overline{X}\big)=\frac{\sigma^2}{n}\ , $$ as the sketch at the end of this answer checks numerically. None of this has much to do with the so-called "law of large numbers", of which there are several different versions. All versions of it, however, have to do with what happens to $\ \overline{X}\ $ or its distribution as $\ n\rightarrow\infty\ $. While the proofs of some of them do rely on the $\ X_i\ $ being independent and identically distributed (such as those of the classical weak and strong laws of large numbers, for instance), and will certainly make use of some of the above identities, that is about the extent of the relation between them. None of the identities discussed above is called a "law of large numbers".
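Finally, the promised sketch of the pairwise-uncorrelated case: independent draws with unequal, arbitrarily chosen variances, checking that the empirical variance of $\ \overline{X}\ $ matches $\ \frac{1}{n^2}\sum_{i=1}^n\sigma_i^2\ $.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmas = np.array([1.0, 2.0, 0.5, 3.0])   # unequal standard deviations
n = len(sigmas)

theory = (sigmas**2).sum() / n**2         # (1/n^2) * sum_i sigma_i^2
samples = rng.normal(loc=0.0, scale=sigmas, size=(200_000, n))
xbar = samples.mean(axis=1)               # Xbar for each replication
print(theory, xbar.var())                 # both ≈ 0.89, up to simulation error
```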