Nice inequality: $\sum\limits_{i=1}^{2m+1}x^2_i\le2m$ if $\sum\limits_{i=1}^{2m+1}x_i=0$ and $|x_i|\le 1$

104 Views

Let $x_1,x_2,\cdots,x_{2m+1}\in \mathbb{R}$ such that $|x_{i}|\le 1$ and $x_1+x_2+\cdots+x_{2m+1}=0$. Show that $$\sum_{i=1}^{2m+1}x^2_i\le 2m.$$

My try: Let $x_{i}=\sin{y_{i}}$; then $\sum\limits_{i=1}^{2m+1}\sin y_i=0$, and we must show that $$\sum_{i=1}^{2m+1}\sin^2 y_i\le 2m.$$ It seems that neither the Cauchy-Schwarz inequality nor the AM-GM inequality applies.

And it is clear that $$\sum_{i=1}^{2m+1}x^2_{i}\le 1+1+\cdots+1=2m+1.$$ But I cannot prove the sharper bound $2m$.
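For intuition (not a proof), here is a quick numerical sanity check in Python: sample random admissible points, confirm none exceeds $2m$, and note that the configuration with $m$ ones, $m$ minus-ones and one zero attains $2m$ exactly. All names below are illustrative.

```python
import random

def sum_of_squares_check(m, trials=10_000, seed=1):
    """Sample random points with sum 0 and |x_i| <= 1 and check that
    the sum of squares never exceeds 2m (a sanity check, not a proof)."""
    rng = random.Random(seed)
    n = 2 * m + 1
    for _ in range(trials):
        xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        mean = sum(xs) / n
        xs = [x - mean for x in xs]     # enforce sum = 0 (up to rounding)
        top = max(abs(x) for x in xs)
        if top > 1.0:
            xs = [x / top for x in xs]  # shrink back into the box; sum stays 0
        assert sum(x * x for x in xs) <= 2 * m
    return True

# The bound is attained: m ones, m minus-ones and one zero.
extremal = [1.0] * 3 + [-1.0] * 3 + [0.0]   # m = 3
print(sum_of_squares_check(3), sum(x * x for x in extremal))  # True 6.0
```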

4 Answers

BEST ANSWER

This answer is an excerpt from my answer to another question.


Without loss of generality, suppose $x_1, \cdots, x_k \geqslant 0$ and $x_{k + 1}, \cdots, x_{2m + 1} \leqslant 0$.

If $\sum\limits_{j = 1}^k x_j^2 \leqslant k - 1$, then$$ \sum_{j = 1}^{2m + 1} x_j^2 = \sum_{j = 1}^k x_j^2 + \sum_{j = k + 1}^{2m + 1} x_j^2 \leqslant (k - 1) + (2m + 1 - k) = 2m. $$ If $\sum\limits_{j = k + 1}^{2m + 1} x_j^2 \leqslant 2m - k$, then analogously $\sum\limits_{j = 1}^{2m + 1} x_j^2 \leqslant 2m$.

Now suppose$$ \sum_{j = 1}^k x_j^2 > k - 1, \quad \sum_{j = k + 1}^{2m + 1} x_j^2 > 2m - k. \tag{1} $$ Since $0 \leqslant x_j \leqslant 1$ gives $x_j^2 \leqslant x_j$ for $j \leqslant k$,$$ k - 1 < \sum_{j = 1}^k x_j^2 \leqslant \sum_{j = 1}^k x_j = -\left( \sum_{j = k + 1}^{2m + 1} x_j \right) \leqslant 2m + 1 - k, $$ so $2k < 2m + 2$, which implies $k \leqslant m$. Thus, using $x_j^2 \leqslant |x_j|$ for $j > k$,$$ m \leqslant 2m - k < \sum_{j = k + 1}^{2m + 1} x_j^2 \leqslant -\left( \sum_{j = k + 1}^{2m + 1} x_j \right) = \sum_{j = 1}^k x_j \leqslant k \leqslant m, $$ a contradiction. Therefore, (1) cannot hold.
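The case analysis above can be cross-checked exhaustively on a small grid of admissible values (an illustration only; the grid spacing is an arbitrary choice):

```python
from itertools import product

def grid_max(m, step=0.5):
    """Enumerate all grid points with coordinates in {-1, ..., 1} (spacing
    `step`) and sum zero, and return the largest sum of squares found."""
    vals = [i * step for i in range(round(-1 / step), round(1 / step) + 1)]
    best = 0.0
    for xs in product(vals, repeat=2 * m + 1):
        if abs(sum(xs)) < 1e-9:                 # keep only sum-zero points
            best = max(best, sum(x * x for x in xs))
    return best

print(grid_max(1), grid_max(2))  # 2.0 4.0, matching the bound 2m
```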

ANSWER

First, note that the function we are trying to maximise, namely $\sum x_i^2$, is continuous, and the constraints $|x_i|\le 1$ and $\sum x_i = 0$ define a compact set. Hence the maximum is attained on that set.

Now suppose the maximum is attained at some point $(x_1, x_2, \ldots, x_{2m+1})$. Then there cannot be two distinct indices $i, j$ with both $|x_i| < 1$ and $|x_j| < 1$: otherwise, replacing these values with $x_i + \epsilon$ and $x_j - \epsilon$ for a suitable small $\epsilon \ne 0$ would preserve both constraints and yield a strictly larger value, since $t \mapsto t^2$ is strictly convex.

It remains to check the configurations with at most one $|x_i| < 1$: the $2m$ extreme variables are $\pm 1$ and have an even integer sum, so the sum-zero constraint forces $m$ of them to be $1$, another $m$ to be $-1$, and the remaining variable to be $0$. The maximum is therefore $2m$.
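The convexity step can be made explicit: shifting $\epsilon$ from one coordinate to another preserves the sum and changes the sum of squares by $2\epsilon(x_i - x_j) + 2\epsilon^2$, which is positive for $\epsilon > 0$ and $x_i \ge x_j$. A minimal numeric illustration (values chosen arbitrarily):

```python
def perturb_gain(x_i, x_j, eps):
    """Change in x_i^2 + x_j^2 when replacing (x_i, x_j) by
    (x_i + eps, x_j - eps); the sum x_i + x_j is unchanged."""
    before = x_i ** 2 + x_j ** 2
    after = (x_i + eps) ** 2 + (x_j - eps) ** 2
    return after - before  # equals 2*eps*(x_i - x_j) + 2*eps**2

print(perturb_gain(0.25, -0.5, 0.25))  # 0.5 > 0: the objective increases
```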

ANSWER

You have a function to be maximized subject to a side condition. This calls for a Lagrange multiplier.

We look for the maximum of $\sum_i x_i^2$ subject to $\sum_i x_i = 0$. First define: $$\Lambda = \sum_{i=1}^{2m+1} x_i^2 + \lambda\sum_{i=1}^{2m+1} x_i$$ Note that the second summand is effectively zero under the constraint. Now we look for stationary points in all variables: $$\vec{0}=\left(\frac{\partial \Lambda}{\partial x_1}, \frac{\partial \Lambda}{\partial x_2}, \ldots, \frac{\partial \Lambda}{\partial x_{2m+1}}, \frac{\partial \Lambda}{\partial \lambda}\right)\\ =\left(2x_1+\lambda, 2x_2+\lambda, \ldots, 2x_{2m+1}+\lambda, 0\right)$$ where $\sum_i x_i=0$ was used again. The only solution is $x_i = -\lambda/2$ for all $i$, i.e. all $x_i$ equal, which together with the sum constraint forces all $x_i=0$.

So the only extremum in the interior is a minimum, which means the maximum lies on the boundary of the allowed region: at least one $x_i$ must equal $1$ or $-1$. Since the problem is symmetric in $i$, we may assume $x_1=1$. (The argument for $x_1=-1$ is analogous.) This leads to the reduced function: $$\Lambda_1 = 1+\sum_{i=2}^{2m+1} x_i^2 + \lambda\left(1+\sum_{i=2}^{2m+1}x_i\right)$$ Note that the last term is again zero and thus can be added freely. Setting the gradient to zero, $$\vec{0}=\left(\frac{\partial \Lambda_1}{\partial x_2}, \ldots, \frac{\partial \Lambda_1}{\partial x_{2m+1}}, \frac{\partial \Lambda_1}{\partial \lambda}\right)\\ =\left(2x_2+\lambda, \ldots, 2x_{2m+1}+\lambda, 0\right)$$ has only one solution: $$x_i=-\frac{\lambda}{2}, \quad i\ge 2$$ This is again a minimum, as can be seen by taking the second derivative, which is $2>0$ in every component. So the same argument applies: the maximum must lie on the boundary. Repeating this step, we find that all variables except $x_{2m+1}$ must be $1$ or $-1$. Combinations that make $\sum_i x_i=0$ impossible can be ignored.
Finally, we arrive at the last step: $$\Lambda_{2m} = 2m+x_{2m+1}^2 + \lambda\left(x_{2m+1}+c\right)$$ where $c$ is the sum of the other $2m$ variables. Since $c$ is a sum of an even number of terms, each $1$ or $-1$, it is an even integer. The constraint requires $x_{2m+1}=-c$ with $|x_{2m+1}|\le 1$, and the only even integer of absolute value at most $1$ is zero; hence $c=0$ and $x_{2m+1}=0$ as well. So we have the maximum value for the sum of the squares: $$\sum_{i=1}^{2m+1}x_i^2 \le 2m\cdot 1^2+0^2 = 2m$$
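The parity step — the sum $c$ of an even number of $\pm1$'s is even, so $x_{2m+1}=-c$ with $|x_{2m+1}|\le 1$ forces $c=0$ — can be verified by brute force (illustrative function name):

```python
from itertools import product

def feasible_sums(m):
    """Enumerate every assignment of +/-1 to the first 2m coordinates and
    collect the partial sums c for which x_{2m+1} = -c is still admissible
    (i.e. |c| <= 1). The parity argument predicts only c = 0 survives."""
    return {sum(signs)
            for signs in product((-1, 1), repeat=2 * m)
            if abs(sum(signs)) <= 1}

print(feasible_sums(2), feasible_sums(3))  # {0} {0}
```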

ANSWER

Found a different, easier solution.

Assume that all $x_i$ satisfy the given constraints. If all except one have absolute value $1$, their sum is an even integer, so the last one must be $0$ and we are done.

Now assume that two or more values have absolute value less than $1$. Take the one with the largest absolute value and add the difference $\delta$ that moves it to the nearer of $1$ and $-1$. Subtract $\delta$ from another value that is not $1$ or $-1$ to compensate.

For example, if $x_j=0.9$ and $x_k=-0.95$, add $\delta=-0.05$ to $x_k$ and subtract the same value from $x_j$. This is always possible, since every other non-extreme value is farther from the boundary than the one with the largest absolute value.

Since this operation always increases the sum $\sum_i x_i^2$ (shown below) and pins one more value at $\pm 1$, we can repeat it until only one value whose absolute value is not $1$ remains. That value must be $0$, and the maximum sum of the squares is $2m$.

It remains to show that the sum of squares increases with each operation. Let the two values be $x_j$ and $x_k$, with $|x_j|\ge |x_k|$. I will only show this for $x_j\ge 0$ and thus $\delta>0$; the case $x_j\le 0$ is analogous. We have: $$\left(x_j+\delta\right)^2+\left(x_k-\delta\right)^2 = x_j^2+x_k^2+2\delta(x_j-x_k+\delta)> x_j^2+x_k^2$$ The last inequality follows from $\delta>0$ and $x_j \ge x_k$.
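The procedure can be written out directly. The sketch below (illustrative names; exact dyadic inputs so the arithmetic is exact) pins one coordinate at $\pm1$ per step, compensates on the next-largest interior coordinate, and terminates with a single interior value, which is $0$:

```python
def push_to_extremes(xs, tol=1e-12):
    """Repeatedly take the interior coordinate with the largest absolute
    value, push it to the nearer of +/-1, and subtract the same shift from
    another interior coordinate so the total sum stays unchanged."""
    xs = list(xs)
    while True:
        interior = [i for i, x in enumerate(xs) if abs(x) < 1.0 - tol]
        if len(interior) <= 1:          # at most one non-extreme value left
            return xs
        interior.sort(key=lambda i: abs(xs[i]), reverse=True)
        j, k = interior[0], interior[1]
        target = 1.0 if xs[j] >= 0 else -1.0
        delta = target - xs[j]          # shift that pins x_j at +/-1
        xs[j] += delta
        xs[k] -= delta                  # compensate; x_k stays in [-1, 1]

result = push_to_extremes([0.5, 0.25, -0.75, 0.5, -0.5])  # m = 2, sum = 0
print(sorted(result), sum(x * x for x in result))
# [-1.0, -1.0, 0.0, 1.0, 1.0] 4.0
```

Each step fixes one coordinate at $\pm1$, so the loop runs at most $2m$ times; the feasibility of the compensation is exactly the "farther from the boundary" remark above.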