In Knuth's *The Art of Computer Programming*, Volume 2 (*Seminumerical Algorithms*), Section 4.2.2, *Accuracy of Floating Point Arithmetic*, there is a statement:
> Novice programmers who calculate the standard deviation of some observations by using the textbook formula $$\sigma=\sqrt{\left.\left(n\sum_{1\le k\le n}x^2_k-\left(\sum_{1\le k\le n}x_k\right)^2\right)\right/n(n-1)}\tag{14}$$ often find themselves taking the square root of a negative number!
Can this be demonstrated with some small dataset (a few tens of points) and ordinary IEEE $\mathrm{binary64}$ arithmetic, without resorting to values that differ by $\sim 10$ orders of magnitude?
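For concreteness, formula $(14)$ translates directly into a one-pass routine; a minimal sketch in Python (the function name `textbook_stdev` is mine, not Knuth's):

```python
import math

def textbook_stdev(xs):
    """Sample standard deviation via the 'textbook' formula (14).
    This one-pass form is numerically unstable: for ill-conditioned
    data the radicand can round to a negative number, and math.sqrt
    then raises ValueError."""
    n = len(xs)
    sum_x = sum(xs)                   # sum of x_k
    sum_x2 = sum(x * x for x in xs)   # sum of x_k^2
    return math.sqrt((n * sum_x2 - sum_x * sum_x) / (n * (n - 1)))
```

On benign data this agrees with a numerically stable two-pass routine; the trouble only appears for ill-conditioned inputs.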
If you just want an actual example, here it is: set $n=3$, $x_1=x_2=x_3=1+2^{-52}$ (the smallest binary64 number greater than $1$). Working through the binary64 arithmetic: each $x_k^2$ rounds to $1+2^{-51}$, $\sum x_k^2$ comes out exactly as $3+3\cdot2^{-51}$, and $n\sum x_k^2$ rounds to $9+2^{-48}$; meanwhile $\sum x_k$ rounds to $3+2^{-50}$, so $\left(\sum x_k\right)^2$ rounds to $9+3\cdot2^{-49}$. The numerator of $(14)$ therefore evaluates to $-2^{-49}$.
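This can be replayed in any IEEE binary64 environment; a quick sketch in Python (whose `sum` adds left to right, matching the analysis):

```python
n = 3
x = 1 + 2 ** -52          # smallest binary64 value greater than 1
xs = [x] * n

sum_x2 = sum(v * v for v in xs)  # each x*x rounds to 1 + 2**-51
sum_x = sum(xs)                  # rounds to 3 + 2**-50
numerator = n * sum_x2 - sum_x * sum_x

print(x.hex())      # 0x1.0000000000001p+0
print(numerator)    # -2**-49: the radicand of (14) is negative
```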
Admittedly, these $x_i$ are somewhat artificial. Another example with equal, less contrived $x_i$ is $n=6$, $x_i=0.3$.
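The same check for this dataset — nothing exotic, just six copies of $0.3$:

```python
n = 6
xs = [0.3] * n   # 0.3 is not exactly representable in binary64

sum_x2 = sum(v * v for v in xs)
sum_x = sum(xs)
numerator = n * sum_x2 - sum_x * sum_x
print(numerator)   # negative: taking the square root would fail
```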
And here is the second example expanded to have non-zero variance: now $x_1$ is the smallest FP number greater than $0.3$, while the remaining $x_i$ stay at $0.3$. Interestingly, the negative difference now occurs already for $n=2$.
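A sketch of that $n=2$ case, using `math.nextafter` (available since Python 3.9) to get the binary64 successor of $0.3$:

```python
import math

n = 2
x1 = math.nextafter(0.3, math.inf)  # smallest double greater than 0.3
x2 = 0.3

sum_x2 = x1 * x1 + x2 * x2
s = x1 + x2
numerator = n * sum_x2 - s * s
print(numerator)   # negative already for just two points
```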