On the upper bound of $\sum_{i=1}^{n}x^m_{i}$ subject to the conditions $\sum_{i=1}^{n}x_{i}=0$ and $\sum_{i=1}^{n}x^2_{i}=n$


Let $x_{1},x_{2},\ldots,x_{n}$ be real numbers such that $$\begin{cases} x_{1}+x_{2}+\cdots+x_{n}=0\\ x^2_{1}+x^2_{2}+\cdots+x^2_{n}=n \end{cases}$$ and let $\alpha_{m}=\displaystyle\dfrac{1}{n}\sum_{i=1}^{n}x^m_{i}$.

This problem is from Mitrinović, D. S., *Analytic Inequalities* (Springer, 1970), p. 347.

M. Lakshmanamurti proved that $$\alpha_{m}\le\dfrac{(n-1)^{m-1}+(-1)^m}{n(n-1)^{(m/2)-1}}.$$

I have been searching for the proof for some days, but I cannot find the 1950s paper. Can someone locate the full paper, or post a proof here? Thanks.


The idea is, roughly, that we want $x_n$ to be as big as possible, because $a^m+b^m \leq (a+b)^m$ for all $m\geq1$ and $a,b \geq 0$.

This means the maximum occurs when $x_n$ is as big as possible and the other $x_i$ are close to $0$.

From the second condition we know that $x_n^2 \leq n$, so $x_n \approx \sqrt{n}$.

From the first condition we need $\sum \limits_{i=1}^{n-1} x_i \approx -\sqrt{n}$, with each term as small in magnitude as possible, so take $x_i = -\frac{1}{\sqrt{n-1}}$ for all $1 \leq i \leq n-1$ and $x_n = \sqrt{n-1}$.

These numbers fulfill the first and second condition, and we just need to evaluate

$$\frac{1}{n}\sum \limits_{i=1}^{n} x_i^m = \frac{1}{n}(n-1)^{m/2}+(-1)^m \frac{n-1}{n} \left(\frac{1}{\sqrt{n-1}}\right)^m = \frac{1}{n}(n-1)^{\frac{m}{2}}+\frac{(-1)^m}{n (n-1)^{m/2-1}} = \frac{(n-1)^{m-1}+(-1)^m}{n(n-1)^{m/2-1}}.$$
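As a sanity check (a short numerical sketch, not part of the original answers), the candidate point satisfies both constraints and attains exactly the bound stated in the question:

```python
import math

def alpha_candidate(n, m):
    # Candidate maximizer: x_i = -1/sqrt(n-1) for i < n, and x_n = sqrt(n-1).
    x = [-1 / math.sqrt(n - 1)] * (n - 1) + [math.sqrt(n - 1)]
    assert abs(sum(x)) < 1e-9                       # first condition: sum = 0
    assert abs(sum(t * t for t in x) - n) < 1e-9    # second condition: sum of squares = n
    return sum(t ** m for t in x) / n               # alpha_m at the candidate point

def bound(n, m):
    # Lakshmanamurti's bound: ((n-1)^{m-1} + (-1)^m) / (n (n-1)^{m/2 - 1})
    return ((n - 1) ** (m - 1) + (-1) ** m) / (n * (n - 1) ** (m / 2 - 1))

for n in (3, 5, 10):
    for m in (2, 3, 4, 5):
        assert abs(alpha_candidate(n, m) - bound(n, m)) < 1e-9
```

For example, for $n=3$, $m=2$ both sides equal $1$, consistent with $\alpha_2 = 1$ forced by the second condition.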

What is left is to prove that $x_n$ cannot exceed $\sqrt{n-1}$, even by a small amount (I will try to prove it).


Following Ahmad's answer, we need to show that $x_i = -\frac{1}{\sqrt{n-1}}$ for all $1 \leq i \leq n-1$ and $x_n = \sqrt{n-1}$ indeed produces the maximum value of $\sum_{i=1}^n x_i^m$, which is $\alpha_m^* = \dfrac{(n-1)^{m-1}+(-1)^m}{(n-1)^{(m/2)-1}}$. Indeed, the paper *Some constrained optimization problems in elementary statistics* by Wolfgang Stadje (2002) notes that M. Lakshmanamurti, in the 1950 article mentioned in the question, uses exactly these values to prove the claim.

Let us change variables to $x_i = -\frac{1}{\sqrt{n-1}} + y_i$ for all $1 \leq i \leq n-1$ and $x_n = \sqrt{n-1} + y_n$. Then the first condition forces $\sum_{i=1}^n y_i = 0$. Further, demanding $n = \sum_{i=1}^n x_i^2$ gives, when using $\sum_{i=1}^n y_i = 0$, the exact relation $y_n = - \frac{\sqrt{n-1}}{2 n } \sum_{i=1}^n y_i^2$. In particular, only $y_n \le 0$ is possible.
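The relation for $y_n$ holds exactly for every feasible point, not just approximately; a quick numerical sketch (illustrative only) is to draw an arbitrary feasible $(x_1,\dots,x_n)$, form the perturbations $y_i$, and compare both sides:

```python
import math
import random

def check_yn_identity(n, seed=0):
    rng = random.Random(seed)
    # Build an arbitrary feasible point: center to sum 0, scale to sum of squares n.
    x = [rng.gauss(0, 1) for _ in range(n)]
    mean = sum(x) / n
    x = [t - mean for t in x]
    scale = math.sqrt(n / sum(t * t for t in x))
    x = [t * scale for t in x]
    # Perturbations relative to the candidate maximizer.
    y = [x[i] + 1 / math.sqrt(n - 1) for i in range(n - 1)]
    y.append(x[-1] - math.sqrt(n - 1))
    lhs = y[-1]
    rhs = -math.sqrt(n - 1) / (2 * n) * sum(t * t for t in y)
    return abs(lhs - rhs)   # should be ~0 up to rounding

assert check_yn_identity(6) < 1e-9
```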

Expanding $\sum_{i=1}^n x_i^m$ to first order in $y_i$, we obtain (again using $\sum_{i=1}^n y_i = 0$)

$$ \sum_{i=1}^n x_i^m = \alpha_m^* + m (n-1)^{-\frac{m-1}{2}} [(n-1)^{m-1} - (-1)^{m-1}] y_n $$

Since $(n-1)^{m-1} - (-1)^{m-1} \ge 0$ and $y_n \le 0$, the value of $\sum_{i=1}^n x_i^m$ does not increase in the vicinity of the original values of $x_i$, no matter what the changes $y_i$ are. This shows that the original values of $x_i$ indeed locally produce the highest value of $\alpha_m$.
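This argument is local, but one can also probe the claim globally with a rough Monte Carlo check (a sketch, not a proof): draw random points satisfying both constraints and verify that none exceeds the bound.

```python
import math
import random

def bound(n, m):
    # Lakshmanamurti's bound on alpha_m.
    return ((n - 1) ** (m - 1) + (-1) ** m) / (n * (n - 1) ** (m / 2 - 1))

def random_feasible(n, rng):
    # Random point with sum 0 and sum of squares n.
    x = [rng.gauss(0, 1) for _ in range(n)]
    mean = sum(x) / n
    x = [t - mean for t in x]
    scale = math.sqrt(n / sum(t * t for t in x))
    return [t * scale for t in x]

rng = random.Random(1)
n, m = 6, 4
for _ in range(5000):
    x = random_feasible(n, rng)
    alpha = sum(t ** m for t in x) / n
    assert alpha <= bound(n, m) + 1e-9   # no sampled point beats the bound
```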