Extending a simple 1-dimensional observation to 2 dimensions.


Consider the following (easy to prove) statement.

Statement 1. There is an absolute constant $c_1$ such that whenever $n\geq 1$ and $x_1, \ldots, x_n$ are real numbers such that $|x_i| \leq 1$ for all $i$ and $\sum_{i=1}^n x_i = 0$, one can find a permutation $\sigma$ of $\{1, \ldots, n\}$ such that each of the numbers in the sequence $$ x_{\sigma(1)},\ x_{\sigma(1)} + x_{\sigma(2)},\ x_{\sigma(1)} + x_{\sigma(2)} + x_{\sigma(3)}, \ldots, x_{\sigma(1)} + \cdots + x_{\sigma(n)} $$ has magnitude at most $c_1$.

In fact, one can easily show that $c_1 = 1$ works: set $\sigma(1) = 1$; then, if the partial sum so far is positive, take an unused negative $x_i$ as the next term, and otherwise take an unused nonnegative one, continuing greedily. Since the total sum is $0$, a term of the required sign is always available, and every partial sum stays in $[-1, 1]$.
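The greedy argument can be sketched in code (an illustrative helper of my own, not from the question; `confine_1d` is a made-up name):

```python
def confine_1d(xs, bound=1.0):
    """Order xs (each |x| <= bound, total sum 0) so that every
    prefix sum of the returned order has magnitude <= bound."""
    pos = sorted((x for x in xs if x >= 0), reverse=True)  # nonnegative terms
    neg = sorted(x for x in xs if x < 0)                   # negative terms
    order, s = [], 0.0
    while pos or neg:
        # If the running sum is nonnegative, a negative term exists
        # (the total is 0), and adding it keeps |s| <= bound; and
        # symmetrically when the running sum is negative.
        take_neg = (s >= 0 and neg) or not pos
        x = neg.pop(0) if take_neg else pos.pop(0)
        s += x
        order.append(x)
    return order
```

Each step moves the running sum toward $0$ by at most `bound`, which is exactly why the prefix sums never escape $[-\texttt{bound}, \texttt{bound}]$.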

The following is the two-dimensional version of the above statement, but it is quite unclear to me whether it is true or not.

Statement 2. There is an absolute constant $c_2$ such that whenever $n\geq 1$ and $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ are two sequences of real numbers such that $|x_i|, |y_i|\leq 1$ for all $i$, and $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i = 0$, one can find a permutation $\sigma$ of $\{1, \ldots, n\}$ such that for each $1\leq k\leq n$, the magnitude of both of $$ x_{\sigma(1)} + x_{\sigma(2)} + x_{\sigma(3)} + \cdots + x_{\sigma(k)}, \quad y_{\sigma(1)} + y_{\sigma(2)} + y_{\sigma(3)} + \cdots + y_{\sigma(k)} $$ is at most $c_2$.

Accepted answer.

The answer is yes; this is a special case of the polygonal confinement theorem in two dimensions, and we may take $c_2=\sqrt{10}$. Here I give the proof from Peter Rosenthal's expository article for your Statement 2 (assuming Statement 1); the general proof inducts on dimension. Given vectors $v_i=(x_i,y_i)\in\textbf R^2$ for $1\le i\le n$ satisfying $|x_i|,|y_i|\le1$ and $\sum_{i=1}^nv_i=\textbf0$, we see that $\|v_i\|\le\sqrt2$, and we may reason as follows. There are finitely many sums $\sum_{i\in S}v_i$ with $S\subseteq\{1,\dots,n\}$; choose $S$ maximizing $\|\sum_{i\in S}v_i\|$ and write $L=\sum_{i\in S}v_i$. For convenience, we write $$v_1+\dots+v_n =u_1+\dots+u_s+w_1+\dots+w_t,$$ where the $u_j$ are the $v_i$ that belong to $S$ and the $w_j$ are the remaining vectors $v_i$; thus $$L=u_1+\dots+u_s=-w_1-\dots-w_t.\tag1$$

We first prove that $u_i\cdot L\ge0$ for all $i$. Indeed, if some $u_i\cdot L<0$, then $$\|L-u_i\|\ge(L-u_i)\cdot(L/\|L\|)=\|L\|-(u_i\cdot L)/\|L\|>\|L\|,$$ so $L-u_i$ is a sum of some of the $v_i$ with norm greater than $\|L\|$, contradicting the maximality of $\|L\|$. Similarly, $w_i\cdot L\le 0$ for all $i$.

We consider the subspace $L^\perp:=\{v\in\textbf R^2:v\cdot L=0\}$, and write $v'$ for the component of a vector $v\in\textbf R^2$ in $L^\perp$; that is, $v':=v-((v\cdot L)/\|L\|^2)L$. By (1), we have $$u_1'+\dots+u_s'=w_1'+\dots+w_t'=0.$$ By Statement 1 from your question (scaled to vectors of length at most $\sqrt2$, since $\|v_i\|\le\sqrt2$ by hypothesis), there exists a permutation $Q$ of $\{1,\dots,s\}$ such that $\|\sum_{i=1}^j u'_{Q(i)}\|\le\sqrt2$ for $1\le j\le s$, as well as a permutation $R$ of $\{1,\dots,t\}$ such that $\|\sum_{i=1}^jw_{R(i)}'\|\le\sqrt2$ for $1\le j\le t$.

The idea now is to alternate greedily between the $u$ and $w$ vectors: take vectors from the $u$'s until the sign of the component along $L$ flips, then take vectors from the $w$'s until it flips again, repeating until all the vectors are used. Since $u_i\cdot L\ge0$ and $w_i\cdot L\le0$, we may choose minimal $r_1$ such that $$u_{Q(1)}\cdot L+\sum_{i=1}^{r_1}(w_{R(i)}\cdot L)\le0.$$ Then we may choose minimal $s_1$ such that $$u_{Q(1)}\cdot L+\sum_{i=1}^{r_1}(w_{R(i)}\cdot L) +\sum_{i=2}^{s_1}(u_{Q(i)}\cdot L)\ge0,$$ and then minimal $r_2$ such that $$u_{Q(1)}\cdot L+\sum_{i=1}^{r_1}(w_{R(i)}\cdot L) +\sum_{i=2}^{s_1}(u_{Q(i)}\cdot L) +\sum_{i=r_1+1}^{r_2}(w_{R(i)}\cdot L)\le0.$$ Continuing in this fashion, we obtain the following permutation of the $v_i$: $$(u_{Q(1)},w_{R(1)},\dots,w_{R(r_1)}, u_{Q(2)},\dots,u_{Q(s_1)}, w_{R(r_1+1)},\dots,w_{R(r_2)},\dots).$$ By the greediness of the construction, the component along $L$ of each partial sum has norm at most $\sqrt2$.
The components of the partial sums of the $u$'s and of the $w$'s along $L^\perp$ each have norm at most $\sqrt2$ by Statement 1, so the $L^\perp$-component of each partial sum has norm at most $2\sqrt2$. Thus the norm of each partial sum is at most $$c_2=\sqrt{(2\sqrt2)^2+(\sqrt2)^2}=\sqrt{10}\approx3.162,$$ as needed.
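For small inputs, the whole construction can be sketched as follows (my own illustrative code, not from Rosenthal's article; the search for the maximizing subset $S$ is brute force, so this is exponential in $n$ and only meant to mirror the proof):

```python
import math

def greedy_1d_order(items, key):
    """Order items so prefix sums of key(item) stay within max|key| of 0
    (the scaled Statement 1), assuming the keys sum to 0."""
    pos = sorted((it for it in items if key(it) >= 0), key=key, reverse=True)
    neg = sorted((it for it in items if key(it) < 0), key=key)
    out, s = [], 0.0
    while pos or neg:
        take_neg = (s >= 0 and neg) or not pos
        it = neg.pop(0) if take_neg else pos.pop(0)
        s += key(it)
        out.append(it)
    return out

def confine_2d(vs):
    """Reorder vs (tuples with |x|,|y| <= 1, total sum 0) so every
    partial sum has norm at most sqrt(10). O(2^n): small n only."""
    n = len(vs)
    # Step 1: find the subset S whose sum L has maximal norm.
    best_mask, Lx, Ly, best = 0, 0.0, 0.0, -1.0
    for mask in range(1 << n):
        sx = sum(vs[i][0] for i in range(n) if mask >> i & 1)
        sy = sum(vs[i][1] for i in range(n) if mask >> i & 1)
        if sx * sx + sy * sy > best:
            best_mask, Lx, Ly, best = mask, sx, sy, sx * sx + sy * sy
    norm = math.hypot(Lx, Ly) or 1.0
    l = (Lx / norm, Ly / norm)   # unit vector along L
    p = (-l[1], l[0])            # unit vector spanning L^perp
    us = [vs[i] for i in range(n) if best_mask >> i & 1]
    ws = [vs[i] for i in range(n) if not best_mask >> i & 1]
    # Step 2: order each half so partial sums along L^perp stay <= sqrt(2).
    us = greedy_1d_order(us, key=lambda v: v[0] * p[0] + v[1] * p[1])
    ws = greedy_1d_order(ws, key=lambda v: v[0] * p[0] + v[1] * p[1])
    # Step 3: interleave greedily on the component along L;
    # u's have nonnegative L-component, w's nonpositive.
    out, s_l = [], 0.0
    while us or ws:
        take_w = (s_l >= 0 and ws) or not us
        v = ws.pop(0) if take_w else us.pop(0)
        s_l += v[0] * l[0] + v[1] * l[1]
        out.append(v)
    return out
```

Step 3 keeps the $L$-component of every partial sum within $\sqrt2$, and step 2 keeps the $L^\perp$-component within $2\sqrt2$, matching the $\sqrt{10}$ bound above.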