According to the Wikipedia article on variance:
The variance of a set of n equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all points from each other:
$$ Var(X)=\frac{1}{n^2} \sum_{i=1}^n\sum_{j=1}^n\frac{1}{2}(x_i-x_j)^2 =\frac{1}{n^2} \sum_{i}\sum_{j>i}(x_i-x_j)^2 $$
How can this be derived from the better-known formula, $\sum_{i=1}^n\frac{(x_i-\mu)^2}{n}$, or otherwise?
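Before deriving it, one can confirm the identity numerically. The short Python sketch below (with arbitrary example data) compares the usual mean-based population variance with the pairwise-difference formula:

```python
# Sanity check: the pairwise-difference formula agrees with the
# usual mean-based (population) variance formula.
xs = [2.0, 3.0, 5.0, 7.0, 11.0]  # arbitrary example data
n = len(xs)

# Var(X) = (1/n) * sum_i (x_i - mu)^2
mu = sum(xs) / n
var_mean = sum((x - mu) ** 2 for x in xs) / n

# Var(X) = (1/n^2) * sum_{i<j} (x_i - x_j)^2
var_pairs = sum((xs[i] - xs[j]) ** 2
                for i in range(n)
                for j in range(i + 1, n)) / n**2

print(var_mean, var_pairs)  # the two values agree
```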
Here is a short proof using random variables: Let $X, Y$ be i.i.d. random variables having finite variance. Since $\mathsf{E}[X-Y]=0$, it follows that
$$ \mathsf{E}[(X-Y)^2]=\mathsf{Var}(X-Y)=\mathsf{Var}(X)+\mathsf{Var}(Y)=2\mathsf{Var}(X).$$
When $X$ has a discrete distribution over the finite set $\{x_1, \dots, x_n\}$ with $\mathsf{P}[X = x_i]=p_i$, the same computation can also be carried out directly with sums. Let $\mu = \sum_{i=1}^{n} x_i p_i$ be the mean. Then
\begin{align*} \sum_{i,j=1}^{n} (x_i - x_j)^2 p_i p_j &= \sum_{i,j=1}^{n} \bigl((x_i - \mu) - (x_j - \mu)\bigr)^2 p_i p_j \\ &= \sum_{i,j=1}^{n} \left[ (x_i - \mu)^2 - 2(x_i - \mu)(x_j - \mu) + (x_j - \mu)^2 \right] p_i p_j \\ &= \sigma^2 - 2\Bigl(\sum_{i=1}^{n}(x_i - \mu)p_i\Bigr)\Bigl(\sum_{j=1}^{n}(x_j - \mu)p_j\Bigr) + \sigma^2 \\ &= \sigma^2 - 2(\mu-\mu)(\mu-\mu) + \sigma^2 \\ &= 2\sigma^2, \end{align*}
where the two squared terms use $\sum_i p_i = 1$ and the cross term vanishes because $\sum_{i}(x_i-\mu)p_i = \mu - \mu = 0$.
So the double sum is $2$ times the variance, matching the random-variable computation above. Of course, if we further assume that the $p_i$ are all equal (with the value $p_i = \frac{1}{n}$), then this reduces to the OP's case.
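The weighted identity can also be checked numerically. The Python sketch below (support points and probabilities are arbitrary) evaluates both sides of $\sum_{i,j}(x_i-x_j)^2 p_i p_j = 2\sigma^2$:

```python
# Check the weighted identity: sum_{i,j} (x_i - x_j)^2 p_i p_j = 2*sigma^2.
xs = [1.0, 4.0, 6.0]   # arbitrary support points
ps = [0.2, 0.5, 0.3]   # arbitrary probabilities summing to 1

# mu = E[X], sigma2 = Var(X) for the discrete distribution
mu = sum(x * p for x, p in zip(xs, ps))
sigma2 = sum((x - mu) ** 2 * p for x, p in zip(xs, ps))

# The double sum over all (i, j) pairs, weighted by p_i * p_j
double_sum = sum((xi - xj) ** 2 * pi * pj
                 for xi, pi in zip(xs, ps)
                 for xj, pj in zip(xs, ps))

print(double_sum, 2 * sigma2)  # both sides coincide
```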