As Euclidean spaces are Hilbert spaces, they are of "negative type". Hence, for any $n\in\mathbb Z^+$, given two sets of $n$ points $\{x_i\}_{i=1}^n\subset\mathbb R^m$ and $\{y_j\}_{j=1}^n\subset\mathbb R^m$ (with $m\in\mathbb Z^+$), the following inequality holds:
$$2\sum_{i,j=1}^n\Vert x_i-y_j \Vert\geq\sum_{i,j=1}^n\left(\Vert x_i-x_j \Vert+\Vert y_i-y_j \Vert\right)$$
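(Before any proof, a quick numerical sanity check may be reassuring. The sketch below is plain Python with a made-up helper `lhs_rhs`; it draws random Gaussian point sets and compares both sides of the inequality.)

```python
import math
import random

def lhs_rhs(xs, ys):
    """Both sides of the inequality for point lists xs, ys in R^m."""
    d = math.dist  # Euclidean distance (Python 3.8+)
    lhs = 2 * sum(d(x, y) for x in xs for y in ys)
    rhs = (sum(d(a, b) for a in xs for b in xs)
           + sum(d(a, b) for a in ys for b in ys))
    return lhs, rhs

random.seed(0)
n, m = 5, 3
xs = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
ys = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
lhs, rhs = lhs_rhs(xs, ys)
assert lhs >= rhs  # the claimed inequality, for this random instance
```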
I've made multiple unsuccessful attempts to:
- Find an elegant elementary proof of the inequality when $m=1$ and $\Vert\cdot\Vert$ is the absolute value.
And I wanted it to be "elegant" so that I could extend it to:
- Prove the inequality when $\Vert\cdot\Vert$ is the 2-norm, for $m>1$.
(I haven't been able to prove this case by any other means, either.)
So my question is: How can I prove the inequality without using the "negativeness" of $\mathbb R^m$?
By this I mean that I want a direct proof and not simply stating "$\mathbb R^m$ admits an embedding into a Hilbert space (as it is one) and the proof can be concluded by applying I. J. Schoenberg's 1938 theorem".
In case anyone is curious about what I mean by negative type, see Lev B. Klebanov's $\mathfrak N$-Distances and their Applications. But the basic idea is that the distance behaves almost like a negative semidefinite kernel. This is especially important in probability theory because it is what makes $\mathcal E$-statistics work.
For reference: the Euclidean metric being of negative type means that for any $z_1,\dots, z_{k} \in\mathbb{R}^m$ and any real scalars $b_1,\dots, b_k$ such that $\sum b_i =0$ we have $$\sum_{i, j} b_i b_j \|z_i-z_j\|\le 0 \tag1$$ The inequality you want to prove is equivalent to (1), so it cannot have any substantially simpler proof than (1) itself. Indeed, in (1) it suffices to consider rational $b_i$, by the density of the rationals. Hence, it suffices to consider integer $b_i$, by multiplying everything by the common denominator of the $b_i$. Hence it suffices to consider $b_i\in \{\pm 1\}$, by repeating each point $|b_i|$ times; since the $b_i$ sum to zero, there are equally many $+1$s and $-1$s. Then we can rename $z_i$ as "x" if $b_i=1$ and as "y" otherwise, arriving at the inequality you stated.
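To see (1) concretely, here is a small numerical check (a Python sketch; `neg_type_quadratic` is a name I made up): it evaluates the quadratic form at random points, with the weights centered so that they sum to zero.

```python
import math
import random

def neg_type_quadratic(zs, bs):
    """Sum_{i,j} b_i * b_j * ||z_i - z_j|| over ordered pairs;
    should be <= 0 whenever sum(bs) == 0."""
    return sum(bi * bj * math.dist(zi, zj)
               for bi, zi in zip(bs, zs)
               for bj, zj in zip(bs, zs))

random.seed(1)
k, m = 6, 2
zs = [[random.gauss(0, 1) for _ in range(m)] for _ in range(k)]
bs = [random.gauss(0, 1) for _ in range(k)]
mean = sum(bs) / k
bs = [b - mean for b in bs]        # enforce sum(bs) == 0
val = neg_type_quadratic(zs, bs)
assert val <= 1e-9                 # nonpositive up to rounding
```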
A proof of (1) for $m=1$: order the numbers as $z_1\le \dots\le z_k$ and focus on the contribution of the gap $[z_i, z_{i+1}]$ to the sum in (1). Since the double sum runs over ordered pairs, this gap contributes $2(z_{i+1}-z_i)$ times $$ \sum_{j\le i, \ \ell > i} b_j b_{\ell} = \sum_{j\le i} b_j \sum_{\ell > i} b_{\ell} = -\left(\sum_{j\le i} b_j\right)^2 \le 0, $$ where the last equality uses $\sum_i b_i = 0$. Since the left-hand side of (1) is a sum of such nonpositive quantities, it is nonpositive.
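The gap decomposition can be verified numerically; the sketch below (with hypothetical names `quadratic_form` and `gap_decomposition`) checks that the double sum in (1) equals the sum of the per-gap contributions, with the factor 2 coming from the ordered pairs.

```python
import random

def quadratic_form(zs, bs):
    """Left-hand side of (1) on the line, over ordered pairs (i, j)."""
    return sum(bi * bj * abs(zi - zj)
               for bi, zi in zip(bs, zs)
               for bj, zj in zip(bs, zs))

def gap_decomposition(zs, bs):
    """Sum over gaps between consecutive sorted points: each gap
    contributes -2 * (gap length) * (prefix sum of b)^2, assuming
    sum(bs) == 0; the 2 accounts for ordered pairs."""
    pairs = sorted(zip(zs, bs))
    total, prefix = 0.0, 0.0
    for (z0, b0), (z1, _) in zip(pairs, pairs[1:]):
        prefix += b0
        total -= 2 * (z1 - z0) * prefix ** 2
    return total

random.seed(2)
k = 7
bs = [random.gauss(0, 1) for _ in range(k)]
mean = sum(bs) / k
bs = [b - mean for b in bs]          # enforce sum(bs) == 0
zs = [random.uniform(-1, 1) for _ in range(k)]
q, g = quadratic_form(zs, bs), gap_decomposition(zs, bs)
assert abs(q - g) < 1e-9             # the two expressions agree
assert g <= 0                        # manifestly nonpositive
```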
The extension to $\mathbb{R}^m$ is based on the fact that the length of a line segment $S$ can be recovered, up to a constant $C_m$ depending only on the dimension, by projecting $S$ orthogonally onto a line $L$ through the origin and integrating the lengths of these projections over all directions of $L$ (a classical integral-geometry fact). So, project all the points onto $L$, apply the $m=1$ inequality to the projections, and then integrate over all $L$.
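This projection identity is easy to test by Monte Carlo in $\mathbb R^2$, where the constant works out to $C_2 = 2/\pi$: averaging $|\langle u, v\rangle|$ over uniformly random unit vectors $u$ gives $(2/\pi)\|v\|$. A sketch (with a made-up helper `avg_projected_length`):

```python
import math
import random

def avg_projected_length(v, samples=200_000, seed=3):
    """Monte Carlo estimate of E_u |<u, v>| over uniform unit
    directions u in R^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        t = rng.uniform(0, 2 * math.pi)
        total += abs(math.cos(t) * v[0] + math.sin(t) * v[1])
    return total / samples

v1, v2 = (3.0, 4.0), (-1.0, 2.0)
r1 = avg_projected_length(v1) / math.hypot(*v1)
r2 = avg_projected_length(v2) / math.hypot(*v2)
# both ratios should approximate the same constant C_2 = 2/pi
assert abs(r1 - 2 / math.pi) < 0.01
assert abs(r2 - 2 / math.pi) < 0.01
```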