Does every connected compact metric space have a unique always attainable average distance?


Problem statement

Let $(X,d)$ be a connected compact metric space. Then does there always exist a unique value $\alpha\geq0$ with the following property?

For every $x_1,\ldots,x_k\in X$ there exists $x\in X$ such that the average distance from $x$ to $x_i$ for $i=1,\ldots,k$ is exactly $\alpha$.

Proof of existence

The following proves such a value always exists, so the main question is whether it is unique.

For $v\in X^n$, define \begin{align*} f_v(x)&:=\frac{d(x,v_1)+\ldots+d(x,v_n)}n, \\a(v)&:=\min_{x\in X}f_v(x), \\b(v)&:=\max_{x\in X}f_v(x). \end{align*} Because $X$ is compact and $f_v$ is continuous, the image of $f_v$ is also compact and thus indeed has a minimum and a maximum.
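
These definitions are easy to approximate numerically. The following is a minimal sketch for the (assumed, purely illustrative) choice $X=[0,1]$ with the usual metric, computing $a(v)$ and $b(v)$ by a grid search; for $v=(0,\tfrac12,1)$ we have $f_v(x)=(1+|x-\tfrac12|)/3$, so $a(v)=\tfrac13$ and $b(v)=\tfrac12$.

```python
def f(v, x):
    """Average distance from x to the points of the tuple v, on X = [0, 1]."""
    return sum(abs(x - vi) for vi in v) / len(v)

# Discretization of X = [0, 1]; the resolution is an arbitrary choice.
grid = [i / 1000 for i in range(1001)]

def a(v):
    """min_x f_v(x), approximated over the grid."""
    return min(f(v, x) for x in grid)

def b(v):
    """max_x f_v(x), approximated over the grid."""
    return max(f(v, x) for x in grid)

v = (0.0, 0.5, 1.0)
print(a(v), b(v))   # ≈ 0.3333 and 0.5
```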

Let $v\in X^n$ and $w\in X^m$. We have $$a(w)\leq\frac{f_w(v_1)+\ldots+f_w(v_n)}n=\frac{f_v(w_1)+\ldots+f_v(w_m)}m\leq b(v).$$ It follows that $a$ is bounded above and $b$ is bounded below, so $\alpha:=\sup_v a(v)$ and $\beta:=\inf_w b(w)$ exist and satisfy $\alpha\leq\beta$.
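
The chain of inequalities above can be sanity-checked numerically. A sketch on the (assumed) space $X=[0,1]$, with random tuples $v,w$; the search grid is augmented with the tuple points themselves so the comparison holds up to floating-point error only:

```python
import random

def f(v, x):
    """Average distance from x to the points of the tuple v, on X = [0, 1]."""
    return sum(abs(x - vi) for vi in v) / len(v)

grid = [i / 1000 for i in range(1001)]

random.seed(0)
for _ in range(100):
    v = [random.random() for _ in range(random.randint(1, 5))]
    w = [random.random() for _ in range(random.randint(1, 5))]
    # The middle quantity: the average of f_w over the points of v,
    # which equals the average of f_v over the points of w.
    cross = sum(f(w, vi) for vi in v) / len(v)
    a_w = min(f(w, x) for x in grid + v)   # a(w), up to discretization
    b_v = max(f(v, x) for x in grid + w)   # b(v), up to discretization
    assert a_w <= cross + 1e-12            # a(w) <= cross
    assert cross <= b_v + 1e-12            # cross <= b(v)
```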

Let $v\in X^n$, so we have $a(v)\leq\alpha\leq\beta\leq b(v)$. Since $X$ is connected, the image of $f_v$ is connected, so since it contains $a(v)$ and $b(v)$, it also contains $[a(v),b(v)]$. It follows that there exists $x\in X$ such that $f_v(x)=\alpha$.

Note that uniqueness is thus equivalent to the statement that $\alpha=\beta$ in the above proof.

Looking for counterexamples

The most straightforward connected compact metric spaces that come to mind are the cubes $[0,1]^n$. Let $v(t):=(t,\ldots,t)$. We have $$a(v(0),v(1))=b(v(1/2))=\sqrt{n}/2=\alpha=\beta.$$
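
A numerical check of this example for $n=2$ (the grid resolution is an arbitrary choice): the minimum of the average distance to the two opposite corners is attained along the diagonal, and the maximum distance from the center is attained at the corners, both equal to $\sqrt2/2$.

```python
from math import hypot, sqrt

# Grid over [0, 1]^2.
grid = [(i / 100, j / 100) for i in range(101) for j in range(101)]

def f2(v, x):
    """Average Euclidean distance from x to the points of the tuple v."""
    return sum(hypot(x[0] - p[0], x[1] - p[1]) for p in v) / len(v)

corners = ((0.0, 0.0), (1.0, 1.0))                # v(0) and v(1)
a_val = min(f2(corners, x) for x in grid)         # a(v(0), v(1))
b_val = max(f2(((0.5, 0.5),), x) for x in grid)   # b(v(1/2))
print(a_val, b_val, sqrt(2) / 2)   # all three ≈ 0.70710
```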

Consider a regular $n$-gon $P_1,\ldots,P_n$ with center $O$. We have $$a(P_1,\ldots,P_n)=b(O)=|P_1O|=\alpha=\beta.$$

If we have the boundary of a unit square $ABCD$, let $E,F,G,H$ be the centers of the edges. We have $$a(A,B,C,D)=b(E,F,G,H)=\frac14(1+\sqrt5)=\alpha=\beta.$$
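
This computation can also be verified numerically. A sketch, assuming the square has corners $(0,0),(1,0),(1,1),(0,1)$ and parametrizing the boundary by arc length (an implementation choice; distances are Euclidean distances in the plane):

```python
from math import hypot, sqrt

def point(t):
    """Point on the boundary of the unit square at arc length t in [0, 4)."""
    t %= 4.0
    if t < 1: return (t, 0.0)
    if t < 2: return (1.0, t - 1)
    if t < 3: return (3 - t, 1.0)
    return (0.0, 4 - t)

def f(v, x):
    return sum(hypot(x[0] - p[0], x[1] - p[1]) for p in v) / len(v)

boundary = [point(i / 1000) for i in range(4000)]
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]    # A, B, C, D
midpoints = [(0.5, 0.0), (1.0, 0.5), (0.5, 1.0), (0.0, 0.5)]  # E, F, G, H

a_val = min(f(corners, x) for x in boundary)     # a(A, B, C, D), at midpoints
b_val = max(f(midpoints, x) for x in boundary)   # b(E, F, G, H), at corners
print(a_val, b_val, (1 + sqrt(5)) / 4)   # all ≈ 0.809
```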

If we have the unit circle, let $v_n$ denote $n$ evenly spaced points. We have $$\lim_{n\to\infty}a(v_n)=\lim_{n\to\infty}b(v_n)=\frac1{2\pi}\int_0^{2\pi}|1-e^{i\theta}|\,\mathrm{d}\theta=\frac4\pi=\alpha=\beta,$$ so we can essentially approximate any measure; in this case the value is the average distance between two random points on the circle.
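
The limit can be checked with a direct Riemann sum (the value of $n$ below is an arbitrary choice), using $|1-e^{i\theta}|=2\sin(\theta/2)$ for $\theta\in[0,2\pi]$:

```python
from math import pi, sin

n = 100_000
# Average of |1 - e^{i theta_k}| = 2 sin(theta_k / 2) over theta_k = 2 pi k / n.
avg = sum(2 * sin(pi * k / n) for k in range(n)) / n
print(avg, 4 / pi)   # both ≈ 1.27324
```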

Consider an arbitrary acute triangle $ABC$ with circumcenter $O$. Note that $b(O)=|AO|$. Since we can approximate any measure, we can introduce weights $w_A,w_B,w_C$. Fixing $w_A$, we find unique values of $w_B,w_C$ such that $f_{w_AA,w_BB,w_CC}(P)$ has gradient $0$ at $P=O$, which gives $$a(w_AA,w_BB,w_CC)=b(O)=|AO|=\alpha=\beta.$$
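
The gradient condition can be made concrete: since $|O-P_i|$ equals the circumradius $R$ for every vertex, the gradient of the weighted average distance at $O$ vanishes exactly when $O$ is the weighted barycenter of the vertices, i.e. when the weights are the barycentric coordinates of $O$ (which are positive for an acute triangle). A numerical sketch with an arbitrary acute triangle chosen for illustration:

```python
from math import hypot

# An arbitrary acute triangle (an assumption for illustration).
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O = (2.0, 1.0)                        # circumcenter of this triangle
R = hypot(O[0] - A[0], O[1] - A[1])   # circumradius, = sqrt(5)
assert abs(hypot(O[0] - B[0], O[1] - B[1]) - R) < 1e-12
assert abs(hypot(O[0] - C[0], O[1] - C[1]) - R) < 1e-12

wA, wB, wC = 1/4, 5/12, 1/3           # barycentric coordinates of O
# Gradient of the weighted average distance at O: sum_i w_i (O - P_i) / R.
grad = [sum(w * (O[k] - P[k]) / R for w, P in ((wA, A), (wB, B), (wC, C)))
        for k in (0, 1)]
print(grad)   # ≈ (0, 0): O is a critical point of the weighted f
```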

So it seems that $\alpha=\beta$ always holds, but I cannot come up with a proof. I even tried some rather ugly non-symmetric, non-convex sets, as well as some metric spaces that are not subsets of $\mathbb{R}^n$, and while it becomes more work each time, I always end up proving $\alpha=\beta$. All proofs rely on constructing measures $\mu,\nu$ such that $a(\mu)=b(\nu)$, but I cannot figure out the bigger picture of what these measures are in general. I would not even know how to define such measures in arbitrary metric spaces to begin with.

Edit: A measure theory approach

We can phrase the question in terms of probability measures as follows.

Let $(X,d)$ be a connected compact metric space with Borel $\sigma$-algebra $B(X)$. Then does there always exist a unique value $\alpha\geq0$ with the following property?

For every probability measure $P$ on $(X,B(X))$, there exists $x\in X$ such that $\mathbb{E}_{y\sim P}(d(x,y))=\alpha$.

The existence proof is completely analogous if we use $f_P(x):=\mathbb{E}_{y\sim P}(d(x,y))$, for $P$ a probability measure on $(X,B(X))$, because $f_P$ is still continuous. Indeed, $f_P$ has Lipschitz constant $1$.
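
The Lipschitz bound follows from the triangle inequality: $|f_P(x)-f_P(x')|\leq\mathbb{E}_{y\sim P}|d(x,y)-d(x',y)|\leq d(x,x')$. A numerical illustration with a discrete measure $P$ on the unit square (the sample points and weights below are arbitrary assumptions):

```python
import random

random.seed(1)
# A random finitely supported probability measure P on [0, 1]^2.
support = [(random.random(), random.random()) for _ in range(20)]
weights = [random.random() for _ in range(20)]
total = sum(weights)
weights = [w / total for w in weights]

def d(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def f_P(x):
    """Expected distance from x to y ~ P."""
    return sum(w * d(x, y) for w, y in zip(weights, support))

for _ in range(1000):
    x = (random.random(), random.random())
    z = (random.random(), random.random())
    # The triangle inequality gives |f_P(x) - f_P(z)| <= d(x, z).
    assert abs(f_P(x) - f_P(z)) <= d(x, z) + 1e-12
```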

Equivalence with the finite-average formulation follows from the fact that finitely supported probability measures are dense (with respect to weak convergence) in the probability measures on a separable metric space. Indeed, since $X$ is compact, it is separable.

Optimal probability measures

The generalization to arbitrary probability measures allows us to do the following.

Since the set of probability measures on $(X,B(X))$ is sequentially compact in the topology of weak convergence, see Theorem 10.2, there exist probability measures $P_a$ and $P_b$ on $(X,B(X))$ such that $a(P_a)=\alpha$ and $b(P_b)=\beta$.

Claim: We have $\alpha=\beta$ if, and only if, $f_{P_a}(x)=\alpha$ for all $x\in\operatorname{supp}(P_b)$ and, symmetrically, $f_{P_b}(x)=\beta$ for all $x\in\operatorname{supp}(P_a)$.

Proof: For the left implication, the hypothesis gives the first and last equalities below, and Fubini's theorem gives the middle one: $$\alpha=\mathbb{E}_{x\sim P_b}(f_{P_a}(x))=\mathbb{E}_{x\sim P_a}(f_{P_b}(x))=\beta.$$ For the right implication, note that if $f_{P_a}(x)>\alpha$ for some $x\in\operatorname{supp}(P_b)$, then, since $f_{P_a}\geq\alpha$ everywhere and $f_{P_a}$ is continuous, $$\alpha<\mathbb{E}_{x\sim P_b}(f_{P_a}(x))=\mathbb{E}_{x\sim P_a}(f_{P_b}(x))\leq\beta.$$
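
The exchange $\mathbb{E}_{x\sim P_b}(f_{P_a}(x))=\mathbb{E}_{x\sim P_a}(f_{P_b}(x))$ is an exchange of the order of integration; for finitely supported measures it is just the symmetry of a double sum. A quick check with arbitrary discrete measures on the (assumed) space $[0,1]$:

```python
import random

random.seed(2)

def discrete_measure(k):
    """A random finitely supported probability measure on [0, 1]."""
    pts = [random.random() for _ in range(k)]
    w = [random.random() for _ in range(k)]
    s = sum(w)
    return pts, [wi / s for wi in w]

Pa, Pb = discrete_measure(7), discrete_measure(11)

def f(measure, x):
    """f_P(x) = E_{y ~ P} d(x, y) for a discrete measure P."""
    pts, w = measure
    return sum(wi * abs(x - p) for wi, p in zip(w, pts))

lhs = sum(wi * f(Pa, x) for wi, x in zip(Pb[1], Pb[0]))  # E_{x~P_b} f_{P_a}
rhs = sum(wi * f(Pb, x) for wi, x in zip(Pa[1], Pa[0]))  # E_{x~P_a} f_{P_b}
assert abs(lhs - rhs) < 1e-12   # both are the same double sum
```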

Accepted answer

It seems that indeed $\alpha=\beta$.

First let us reformulate the problem by turning the optimal point $x$ into a measure as well. By definition, $$ a(\mu)=\inf_{x\in X}\int_X d(x,y)\,\mathrm{d}\mu(y). $$ Let us introduce $$ A(\mu):=\inf_{\nu\in\mathcal{P}(X)}\int_X\int_X d(x,y)\,\mathrm{d}\mu(y)\,\mathrm{d}\nu(x), $$ where $\mathcal{P}(X)$ denotes the set of Borel probability measures on $X$. Claim. $A(\mu)=a(\mu)$ for every $\mu\in\mathcal{P}(X)$.

Proof. On one hand it is clear that $A(\mu)\le a(\mu)$ by choosing $\nu=\delta_{x_0}$, with $x_0$ optimal point for $a(\mu)$.

On the other hand, let $\mu\in\mathcal{P}(X)$ be fixed and let $\nu$ be optimal for $A(\mu)$. First, for every Borel $E\subseteq X$ with $0<\nu(E)<1$ we must have $$ \int_X\int_X d(x,y)\,\mathrm{d}\mu(y)\,\mathrm{d}(\nu\llcorner E)(x)=A(\mu)\nu(E), $$ otherwise either $\frac{1}{\nu(E)}\nu\llcorner E$ or $\frac{1}{\nu(E^c)}\nu\llcorner E^c$ would be a strictly better competitor (here $\nu\llcorner E\,(B):=\nu(E\cap B)$ denotes the restriction of $\nu$ to $E$). It follows that for every Borel $E\subseteq X$ with $\nu(E)>0$ $$ \frac{1}{\nu(E)}\int_{x\in E} \int_{y\in X}d(x,y)\,\mathrm{d}\mu(y)\,\mathrm{d}\nu(x)=A(\mu), $$ and by standard arguments the function $x\mapsto \int d(x,y)\,\mathrm{d}\mu(y)$ is constant $\nu$-a.e., equal to $A(\mu)$. Then, for $\nu$-a.e. point $x_0$, $\tilde\nu:=\delta_{x_0}$ is still an optimal competitor for $A(\mu)$, and therefore $x_0$ is optimal for $a(\mu)$ (with the same value). $\blacksquare$

With a similar argument, $b(\mu)$ equals $$ B(\mu):=\sup_{\nu\in\mathcal{P}(X)}\int_X \int_X d(x,y)\,\mathrm{d}\mu(y)\,\mathrm{d}\nu(x). $$

To prove $\alpha=\beta$ we are thus reduced to proving (I exchanged $\mu$ and $\nu$ by symmetry) $$ \sup_{\mu\in\mathcal{P}(X)}\inf_{\nu\in\mathcal{P}(X)}\int_X\int_X d(x,y)\,\mathrm{d}\mu(y)\,\mathrm{d}\nu(x)=\inf_{\nu\in\mathcal{P}(X)}\sup_{\mu\in\mathcal{P}(X)}\int_X\int_X d(x,y)\,\mathrm{d}\mu(y)\,\mathrm{d}\nu(x). $$ This equality follows from a version of the minimax theorem *, after observing that the function $$ \mathcal{P}(X)\times\mathcal{P}(X)\ni(\mu,\nu)\mapsto \int_X\int_X d(x,y)\,\mathrm{d}\mu(y)\,\mathrm{d}\nu(x) $$ is bilinear (and thus convex in one variable and concave in the other).
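
On a sufficiently symmetric finite discretization one can exhibit an explicit certificate for this minimax identity. A sketch on $n$ evenly spaced points of the unit circle (the value of $n$ is an arbitrary choice): by symmetry the uniform measure $\mu$ makes $f_\mu$ constant, so the resulting lower bound on $\sup\inf$ and upper bound on $\inf\sup$ coincide, and weak duality pins both quantities to the same number, which approaches $4/\pi$.

```python
from math import pi, cos, sin, hypot

n = 200
pts = [(cos(2 * pi * k / n), sin(2 * pi * k / n)) for k in range(n)]
# Pairwise chordal distance matrix.
D = [[hypot(p[0] - q[0], p[1] - q[1]) for q in pts] for p in pts]

# f_mu for the uniform measure mu, evaluated at every point: row averages.
f_unif = [sum(row) / n for row in D]

lower = min(f_unif)   # sup_mu inf_nu >= inf_nu E[d] for mu = uniform
upper = max(f_unif)   # inf_nu sup_mu <= sup_mu E[d] for nu = uniform
# Weak duality gives lower <= sup inf <= inf sup <= upper; here they match.
print(lower, upper, 4 / pi)   # both bounds ≈ 1.2732, i.e. ≈ 4/π
```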

* For instance, see: Ky Fan, "Minimax theorems", Proc. Natl. Acad. Sci. USA 39 (1953), no. 1, 42–47. There is probably a simpler reference, but this is the first I could find that works in infinite-dimensional spaces.

Another answer

Nice problem! Here are some thoughts which are too long for a comment.

One natural approach is to try to construct measures $\mu$ such that $b(\mu)-a(\mu)$ is arbitrarily small, as suggested by Aphelli in the comments. Unfortunately this is not always possible; consider for example the tripod graph: three segments $OA$, $OB$, $OC$ of length $1$ glued at a common endpoint $O$, endowed with the path metric.

Let $\mu$ be a probability measure on this space. Without loss of generality we can assume that the segment $AO$ (with $A$ included but $O$ excluded) has measure $\le \frac{1}{3}$, since one of the three legs must carry measure at most $\frac{1}{3}$. Moving from $O$ to $A$ increases the distance to every point outside $AO$ by $1$ and decreases the distance to points of $AO$ by at most $1$, so $f_\mu(A)\ge f_\mu(O)+\frac23-\frac13=f_\mu(O)+\frac{1}{3}$, and hence $b(\mu)-a(\mu)\ge \frac{1}{3}$.

This is not a counterexample to the OP's question, however, since we have $a(A,B,C)=b(O)=1$.
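
A numerical companion to this example. The tripod is modeled below with points encoded as `(leg, t)`, where `t` is the distance from $O$ along one of the three legs (an encoding choice); the path metric is $|t_1-t_2|$ on the same leg and $t_1+t_2$ across legs. The computation confirms $a(A,B,C)=b(O)=1$, so $\alpha=\beta=1$ here, while the gap $b(\mu)-a(\mu)$ for the uniform measure on the leaves is exactly $\frac13$.

```python
def d(p, q):
    """Path metric on the tripod: points are (leg, distance from O)."""
    (l1, t1), (l2, t2) = p, q
    return abs(t1 - t2) if l1 == l2 else t1 + t2

O = (0, 0.0)
A, B, C = (0, 1.0), (1, 1.0), (2, 1.0)
grid = [(leg, i / 1000) for leg in range(3) for i in range(1001)]

def f(v, x):
    return sum(d(x, p) for p in v) / len(v)

# a(A, B, C): minimized at O (moving up any leg only increases the average).
a_val = min(f((A, B, C), x) for x in grid)
# b(O): the farthest points from O are the three leaves, at distance 1.
b_val = max(f((O,), x) for x in grid)
print(a_val, b_val)   # both equal 1.0, so alpha = beta = 1 for the tripod

# The gap bound from above, for the uniform measure on the leaves:
gap = max(f((A, B, C), x) for x in grid) - a_val   # b - a = 4/3 - 1 = 1/3
assert gap >= 1 / 3 - 1e-9
```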