The mean $\mu$ of a set $\{x_k\}$ with $N$ elements is defined by $$\mu = \frac1N\sum_{k=1}^Nx_k=\frac{\sum_{k=1}^Nx_k}{\sum_{k=1}^N1}$$ or $$\sum_{k=1}^N\mu=\sum_{k=1}^Nx_k$$ These equations work just as well if the $x_k$ are vectors $\vec x_k$.
The standard deviation $\sigma$ of $\{x_k\}$ is defined by $$\sigma=\sqrt{\frac1N\sum_{k=1}^N\left(x_k-\mu\right)^2}=\sqrt{\frac1N\sum_{k=1}^N\left(x_k^2-\mu^2\right)}$$ or $$\sum_{k=1}^N\sigma^2+\sum_{k=1}^N\mu^2=\sum_{k=1}^Nx_k^2$$ These do not carry over to vectors directly, because you cannot simply square a vector. You can square the magnitude of a vector, or take its dot product with itself (the same thing), or (in 3D) its cross product with itself ($=0$), or (in 2D) its product with itself as a Complex Number, or as a Perplex (split-complex) or Dual Number. I think the proper generalization to vectors is $$\sigma=\sqrt{\frac1N\sum_{k=1}^N\Vert\vec x_k-\vec \mu\Vert^2}=\sqrt{\frac1N\sum_{k=1}^N\left(\Vert\vec x_k\Vert^2-\Vert\vec\mu\Vert^2\right)}$$
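The equality of the two forms of $\sigma$ (root-mean-square deviation versus mean-of-squares minus square-of-mean) can be spot-checked numerically, for scalars and for the proposed vector generalization alike. A minimal sketch in plain Python; the sample sizes and ranges are arbitrary choices of mine:

```python
import math, random

random.seed(0)
xs = [random.uniform(-5, 5) for _ in range(100)]
N = len(xs)
mu = sum(xs) / N

# Form 1: root-mean-square deviation from the mean
sigma1 = math.sqrt(sum((x - mu)**2 for x in xs) / N)
# Form 2: mean of squares minus square of mean
sigma2 = math.sqrt(sum(x*x - mu*mu for x in xs) / N)
assert abs(sigma1 - sigma2) < 1e-9

# Vector version: squares become squared magnitudes
vs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
mv = tuple(sum(c) / len(vs) for c in zip(*vs))
norm2 = lambda v: sum(c*c for c in v)
s1 = math.sqrt(sum(norm2((v[0]-mv[0], v[1]-mv[1])) for v in vs) / len(vs))
s2 = math.sqrt(sum(norm2(v) - norm2(mv) for v in vs) / len(vs))
assert abs(s1 - s2) < 1e-9
print(sigma1, s1)
```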
What happens if you use the Complex product instead? A Complex Number squared can be negative, so the square root is, in general, not a Real Number. I will call this the Complex Deviation $\kappa$ : $$\{x_k\}\subset\mathbb C$$ $$\kappa=\pm\sqrt{\frac1N\sum_{k=1}^N\left(x_k-\mu\right)^2}=\pm\sqrt{\frac1N\sum_{k=1}^N\left(x_k^2-\mu^2\right)}$$ I leave the $\pm$ sign ambiguous because the Complex Numbers are not ordered like the Reals are, so there isn't a preferred positive root.
As an example, take the set $S=\{x+yi,x-yi,-x+yi,-x-yi\}$ (where $x$ and $y$ are Real). Then the mean $\mu(S)=0$, and the two deviations are $$\sigma(S)=\sqrt{x^2+y^2}$$ $$\kappa(S)=\pm\sqrt{\frac14\left(\left(x^2-y^2+2xyi\right)+\left(x^2-y^2-2xyi\right)+\left(x^2-y^2-2xyi\right)+\left(x^2-y^2+2xyi\right)\right)}$$ $$\kappa(S)=\pm\sqrt{x^2-y^2}=\pm i\sqrt{y^2-x^2}$$ If $\vert x\vert>\vert y\vert$, then $\kappa$ is Real; if $\vert x\vert<\vert y\vert$, then $\kappa$ is Imaginary; and if $x=\pm y$, then $\kappa=0$.
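This example is easy to verify with Python's built-in complex arithmetic. A quick check of one branch of $\kappa^2$ (the specific values of $x$ and $y$ below are arbitrary choices of mine), covering both the Real and the Imaginary cases:

```python
import cmath

def kappa_sq(zs):
    """One branch of the Complex Deviation squared: mean of (z - mu)^2."""
    mu = sum(zs) / len(zs)
    return sum((z - mu)**2 for z in zs) / len(zs)

x, y = 3.0, 1.0
S = [complex(x, y), complex(x, -y), complex(-x, y), complex(-x, -y)]
k2 = kappa_sq(S)                 # expect x^2 - y^2 = 8, so kappa is Real
print(k2, cmath.sqrt(k2))

x, y = 1.0, 2.0
S = [complex(x, y), complex(x, -y), complex(-x, y), complex(-x, -y)]
k2 = kappa_sq(S)                 # expect x^2 - y^2 = -3, so kappa is Imaginary
print(k2, cmath.sqrt(k2))
```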
If the distribution of $\{x_k\}$ has the general shape of an ellipse, then the vector from $0$ to $\kappa$ is parallel to the major axis of the ellipse.
The mean $\mu$ translates with the $x_k$'s, so $(x_k-\mu)$, and thus $\sigma$ and $\kappa$, are translation-invariant. Real scalars can be factored out of the square and the square root, so $\sigma$ and $\kappa$ scale with the $x_k$'s. Rotation does not affect the magnitude of a vector, so $\sigma$ is rotation- (and reflection-)invariant, but $\kappa$ rotates with the $x_k$'s. $$\forall c\in\mathbb C,$$ $$\mu(\{x_k+c\})=\mu(\{x_k\})+c$$ $$\kappa(\{x_k+c\})=\kappa(\{x_k\})$$ $$\kappa(\{cx_k\})=\pm c\kappa(\{x_k\})$$ $$\kappa(\{x_k^*\})=\pm\kappa(\{x_k\})^*$$ (The asterisk denotes the conjugate, which is reflection across the Real axis.)
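These properties are easy to confirm numerically. Working with $\kappa^2$ sidesteps the ambiguous sign of the root; the random data set and the constant $c$ below are arbitrary choices of mine:

```python
import random

random.seed(1)
zs = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(7)]

def kappa_sq(zs):
    # kappa^2 avoids the +/- ambiguity of the square root
    mu = sum(zs) / len(zs)
    return sum((z - mu)**2 for z in zs) / len(zs)

k2 = kappa_sq(zs)
c = complex(0.8, -1.7)

# translation invariance: kappa({z + c}) = kappa({z})
assert abs(kappa_sq([z + c for z in zs]) - k2) < 1e-9
# scaling/rotation covariance: kappa({c z}) = +/- c kappa({z}), so kappa^2 picks up c^2
assert abs(kappa_sq([c * z for z in zs]) - c*c*k2) < 1e-9
# conjugation: kappa({z*}) = +/- kappa({z})*, so kappa^2 conjugates
assert abs(kappa_sq([z.conjugate() for z in zs]) - k2.conjugate()) < 1e-9
print("all invariances check out:", k2)
```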
Because of the way that rotation affects $\kappa$, there shouldn't be any preferred direction; $1$, $-1$, and $i$ all have the same status, so the particular nature of the Complex Numbers shouldn't really be relevant to $\kappa$. Here is my question: is there a formula for $\kappa$ in terms of vectors that works in any number of dimensions?
The question didn't consider what should happen in higher dimensions. If a 2D rectangle or ellipse's $\vec\kappa$ is parallel to the major axis, then what should a 3D box or ellipsoid's $\vec\kappa$ be? A prolate spheroid has one major axis, but an oblate spheroid has two (or infinitely many, given its rotational symmetry). We could leave this ambiguous (as in the 2D case, with the $\pm$ sign), so that an oblate spheroid's $\vec\kappa$ points in any direction in the plane of symmetry. But we still need a general definition of $\vec\kappa$.
Since asking this question, I discovered Geometric Algebra, which seems to be the generalization of $\mathbb C$ (and Quaternions, and a bunch of other stuff) that I was looking for. The GA form of the defining equation is (with an orthonormal basis vector $e_1$) $$\sum_{k=1}^N (e_1\kappa)^2 = \sum_{k=1}^N (e_1(x_k - \mu))^2$$ (The factor of $e_1$ converts a 2D vector to a Complex Number, which is represented as a scalar plus a bivector.)
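One concrete way to see the $e_1$ factor at work is the Pauli-matrix representation of 3D GA, where each $e_i$ is represented by the $2\times 2$ matrix $\sigma_i$ (this representation is my illustration, not part of the derivation above). In it, the element $e_1(xe_1+ye_2) = x + y\,e_1e_2$ comes out as a diagonal matrix holding $x+yi$ and its conjugate, i.e. literally a Complex Number:

```python
import numpy as np

# Pauli matrices represent the orthonormal basis vectors e1, e2, e3 of 3D GA:
# each squares to +1 (the identity) and they anticommute, just like the e_i.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

# The unit bivector e1e2 squares to -1, like the imaginary unit:
e12 = e1 @ e2
assert np.allclose(e12 @ e12, -I)

# e1 times the 2D vector x e1 + y e2 is the scalar x plus the bivector y e1e2;
# in this representation that is diag(x + iy, x - iy).
x, y = 3.0, 1.0
v = x*e1 + y*e2
print(e1 @ v)   # diag(3+1j, 3-1j)
```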
Here's a 3D example, analogous to the one from the question. If $a$, $b$, and $c$ are scalars, then $S = \{(ae_1+be_2+ce_3), (ae_1+be_2-ce_3), ..., (-ae_1-be_2-ce_3)\}$ is the 8 vertices of a box, and $\mu = 0$ . In the following, remember that $(e_1)^2 = 1$ , $e_1e_2 = -e_2e_1$ , and $(e_1e_2)^2 = (e_1e_2)(e_1e_2) = e_1(e_2e_1)e_2 = e_1(-e_1e_2)e_2 = -(e_1e_1)(e_2e_2) = -1$ . $$\sum_{k=1}^8 (e_1\kappa)^2 = \sum_{k=1}^8 (e_1 x_k)^2$$ $$8 (e_1\kappa)^2 = (a+be_1e_2+ce_1e_3)^2 + (a+be_1e_2-ce_1e_3)^2 +...+ (-a-be_1e_2-ce_1e_3)^2$$ $$8 (e_1\kappa)^2 = (a^2-b^2-c^2+2abe_1e_2+2ace_1e_3+bc(-e_2e_3-e_3e_2)) + (a^2-b^2-c^2+2abe_1e_2-2ace_1e_3-bc(-e_2e_3-e_3e_2)) +...+ (a^2-b^2-c^2+2(-a)(-b)e_1e_2+2(-a)(-c)e_1e_3+(-b)(-c)(-e_2e_3-e_3e_2))$$ For each $a$ term, there is a $(-a)$ term, which cancels; similarly for $b$ and $c$ . Also, $(e_2e_3+e_3e_2) = 0$ , so only the squares remain: $$8 (e_1\kappa)^2 = 8(a^2-b^2-c^2)$$ $$(e_1\kappa)^2 = (a^2-b^2-c^2)$$ Now there is the question of square roots in GA. If $a^2 \ge b^2+c^2$ , then we can take the scalar root, though there may be others. (For any scalar $s$ and unit vector $u$ , $s^2 = (su)^2$ , so if $s$ is a square root of something, then $su$ is another square root. But if the square root of $(a^2-b^2-c^2)$ is a vector, then $\kappa$ cannot be a vector. The square root must have an even grade.) $$e_1\kappa = \sqrt{a^2-b^2-c^2}$$ $$\kappa = e_1\sqrt{a^2-b^2-c^2}$$ If $a^2 < b^2+c^2$ , then we can take a bivector root. The bivector must contain $e_1$ as a factor, or $\kappa$ could not be a vector. The other factor must be perpendicular to $e_1$ , like $e_2$ . $$(e_1\kappa)^2 = -(b^2+c^2-a^2)$$ $$e_1\kappa = e_1e_2\sqrt{b^2+c^2-a^2}$$ $$\kappa = e_2\sqrt{b^2+c^2-a^2}$$
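As a sanity check on the algebra above, here is a throwaway geometric-product implementation (my own sketch, not a library) that sums $(e_1 x_k)^2$ over the 8 box vertices. The box dimensions are arbitrary choices of mine, picked so that $a^2 \ge b^2+c^2$:

```python
from itertools import product

def blade_mul(p, q):
    """Geometric product of two basis blades (sorted tuples of basis indices),
    assuming an orthonormal Euclidean basis: e_i e_i = +1, e_i e_j = -e_j e_i."""
    idx, sign = list(p + q), 1
    # bubble-sort the indices, flipping the sign once per transposition
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    # contract adjacent equal indices: e_i e_i = 1
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            k += 2
        else:
            out.append(idx[k]); k += 1
    return sign, tuple(out)

def gmul(X, Y):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    Z = {}
    for p, cp in X.items():
        for q, cq in Y.items():
            s, blade = blade_mul(p, q)
            Z[blade] = Z.get(blade, 0.0) + s * cp * cq
    return Z

a, b, c = 2.0, 1.0, 0.5
e1 = {(1,): 1.0}
total = {}
for sa, sb, sc in product((1, -1), repeat=3):
    xk = {(1,): sa * a, (2,): sb * b, (3,): sc * c}
    y = gmul(e1, xk)                   # a scalar plus a bivector
    for blade, coef in gmul(y, y).items():
        total[blade] = total.get(blade, 0.0) + coef

# Cross terms cancel over the 8 sign patterns; only 8(a^2 - b^2 - c^2) survives.
print(total[()])                        # 8 * (4 - 1 - 0.25) = 22.0
assert all(abs(v) < 1e-12 for k, v in total.items() if k != ())
```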
There is a problem with the above example. Comparing with the 2D case, and reasoning from symmetry, a cube ($a = b = c$) ought to have $\kappa = 0$ (or, maybe, a vector with arbitrary direction). Instead we get $\kappa = e_2 a$ , or any vector perpendicular to $e_1$ with length $a$ . The symmetry is broken by $e_1$ in the defining equation. This did not happen in 2D.
It seems to be a peculiarity of 2 dimensions that the factor of $e_1$ doesn't matter; it could have been $e_2$ or anything else, and $\kappa$ would be the same. I think this is related to the fact that rotations are commutative in 2D, but not in 3D or higher.
What happens when we replace $e_1$ ? Any vector is $e_1$ rotated and scaled. In Geometric Algebra, a rotor is a Real Number $a$ plus a simple bivector $B$ . ("Simple" means it is a product of vectors, also called a "blade". The bivector $(e_1e_2 + e_3e_4)$ is not simple.) If the rotation angle is $\theta$, then $a = \cos\frac\theta 2$ . For a pure rotation, $a$ and $B$ should be normalized so that $(a^2-B^2) = a^2+\lVert B \rVert^2 = 1$ ; otherwise the rotor scales its input by that factor as well as rotating it. For a vector $x$, the rotation is $$x \mapsto (a-B)x(a+B)$$ $$= R^{-1} xR$$
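To make the rotor formula concrete, here is a numerical check, again using the Pauli-matrix representation of 3D GA as an illustration of my own (the angle is an arbitrary choice): the unit rotor $R = \cos\frac\theta2 + \sin\frac\theta2\,e_1e_2$ sends $e_1$ to $\cos\theta\,e_1 + \sin\theta\,e_2$.

```python
import numpy as np

# Basis vectors e1, e2 as Pauli matrices (square to +1, anticommute)
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

theta = 0.7
a = np.cos(theta / 2)                # scalar part of the rotor
B = np.sin(theta / 2) * (e1 @ e2)    # simple bivector part
R = a * I + B
Rinv = a * I - B
assert np.allclose(Rinv @ R, I)      # unit rotor: (a - B)(a + B) = a^2 - B^2 = 1

rotated = Rinv @ e1 @ R              # the sandwich x -> R^{-1} x R
expected = np.cos(theta) * e1 + np.sin(theta) * e2
assert np.allclose(rotated, expected)
print("rotation by theta =", theta, "confirmed")
```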
In 2D, every vector is parallel to every bivector, which implies that their wedge product is zero, and their geometric product is anticommutative. For example, if $x = 5e_1$ and $B = e_1e_2$ , $$xB = 5e_1e_1e_2 = 5e_2$$ $$Bx = (e_1e_2)5e_1 = (-e_2e_1)5e_1 = -5e_2$$ $$xB = -Bx$$ In contrast, a perpendicular vector's product is commutative: $$e_3(e_1e_2) = (e_3e_1)e_2 = (-e_1e_3)e_2 = e_1(-e_3e_2) = e_1(e_2e_3) = (e_1e_2)e_3$$
So, for a 2D vector, $$x(a+B) = xa+xB = ax-Bx = (a-B)x$$ $$xR = R^{-1}x$$ A 3D (or higher) vector can be broken into components parallel and perpendicular to $B$ : $$x = x_\parallel + x_\perp$$ $$x_\parallel R = R^{-1}x_\parallel$$ $$x_\perp R = Rx_\perp$$
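The parallel/perpendicular commutation rules can also be checked numerically. In the Pauli-matrix representation (my illustrative choice; the rotor angle is arbitrary), $e_1$ and $e_2$ lie in the plane of a rotor built from $e_1e_2$, while $e_3$ is perpendicular to it:

```python
import numpy as np

e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

a, s = np.cos(0.4), np.sin(0.4)
R = a * I + s * (e1 @ e2)            # rotor in the e1e2-plane
Rinv = a * I - s * (e1 @ e2)

# In-plane vectors flip the rotor when passed across it...
assert np.allclose(e1 @ R, Rinv @ e1)
assert np.allclose(e2 @ R, Rinv @ e2)
# ...while a perpendicular vector commutes with it.
assert np.allclose(e3 @ R, R @ e3)
print("parallel components satisfy xR = R^{-1}x; perpendicular ones satisfy xR = Rx")
```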
Returning to the $\kappa$ equation, and rotating $e_1$, $$\sum_k (R^{-1}e_1 R\kappa)^2 = \sum_k (R^{-1}e_1 Rx_k)^2$$ $$\sum_k (R^{-1}e_1 R\kappa R^{-1}e_1 R\kappa) = \sum_k (R^{-1}e_1 Rx_k R^{-1}e_1 Rx_k)$$ $e_1$ is in the plane of rotation, so $R$ can be moved across it: $$\sum_k (R^{-2}e_1\kappa e_1 R^2\kappa) = \sum_k (R^{-2}e_1 x_k e_1 R^2 x_k)$$ and $R$ can be moved across $\kappa$ and $x_k$ as well, provided they also lie in that plane. (This is where the 3D case breaks down: in 3D the $x_k$ need not lie in the plane of rotation.) $$\sum_k (R^{-2}e_1\kappa e_1\kappa R^{-2}) = \sum_k (R^{-2}e_1 x_k e_1 x_k R^{-2})$$ $$R^{-2}\sum_k (e_1\kappa e_1\kappa) R^{-2} = R^{-2}\sum_k (e_1 x_k e_1 x_k) R^{-2}$$ Multiplying on the left and right by $R^2$, $$\sum_k (e_1\kappa e_1\kappa) = \sum_k (e_1 x_k e_1 x_k)$$ $$\sum_k (e_1\kappa)^2 = \sum_k (e_1 x_k)^2$$ Thus any vector yields the same equation as $e_1$. But this depends on everything lying in the plane of rotation, so it seems that the Complex Deviation does not generalize beyond 2D.
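In 2D this conclusion can be spot-checked with ordinary complex arithmetic: the geometric product of two 2D vectors $u$ and $x$ corresponds to the complex number $\bar u z$ (with $z$ the complex form of $x$), so replacing $e_1$ by a rotated unit vector $u$ multiplies both sides of the defining equation by $\bar u^2$, which cancels. A quick numerical sketch (the data set and angles are arbitrary choices of mine):

```python
import cmath, random

random.seed(2)
zs = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(9)]
mu = sum(zs) / len(zs)
zs = [z - mu for z in zs]            # centre the data so mu = 0

# The geometric product of 2D vectors u, x corresponds to conj(u) * x in C.
# Solving sum((u kappa)^2) = sum((u x_k)^2) for kappa^2:
def kappa_sq(zs, u):
    rhs = sum((u.conjugate() * z)**2 for z in zs) / len(zs)
    return rhs / u.conjugate()**2    # the conj(u)^2 factor cancels out

k2_e1 = kappa_sq(zs, complex(1, 0))
for theta in (0.3, 1.2, 2.9):
    u = cmath.exp(1j * theta)        # e1 rotated by theta
    assert abs(kappa_sq(zs, u) - k2_e1) < 1e-9
print("kappa^2 is independent of the reference direction in 2D:", k2_e1)
```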