Finding the largest first entry of points in a subset of $\mathbb{R}^n$


Let $A:= \{(x_1,x_2,\cdots,x_n) \in \mathbb{R}^n : x_1+x_2+\cdots+x_n=0,\ x_1^2+\cdots+x_n^2=1\}$.

Find $\max \{x_1: (x_1,x_2,\cdots,x_n) \in A\}$.

My solution:

I have to solve the problem using Lagrange multipliers:

Denote $g(x)=x_1^2+\cdots+x_n^2-1$, and, using the linear constraint to write $x_1=-x_2-\cdots-x_n$, take $f(x)=-x_2-\cdots-x_n$ for $x\in \mathbb{R}^n$.

I solved $D f = \lambda D g$ and found that $x_1=0.$

Edit (my attempt):

$f_{x_1}=-\lambda g_{x_1} \implies 0=-2\lambda x_1 \implies x_1=0$

$f_{x_2}=-\lambda g_{x_2} \implies -1-2\lambda x_2=0$

$\vdots$

$f_{x_i}=-\lambda g_{x_i} \implies -1-2\lambda x_i=0$

Therefore $\lambda = \frac{-1}{2x_i}$, hence $x_2=x_3$ and so on...

I think my solution is not correct.

I wonder if my approach is correct. Any help would be appreciated.


There are 2 best solutions below


By the Cauchy–Schwarz inequality, $$x_1^2=(x_2+\cdots+x_n)^2\leq (n-1)(x_2^2+\cdots+x_n^2)=(n-1)(1-x_1^2),$$ so $nx_1^2 \leq n-1$. The bound $x_1\leq \sqrt{\frac{n-1}{n}}$ is attained by taking $x_2=\ldots=x_n=-\frac{1}{\sqrt{n(n-1)}}.$
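As a quick numerical sanity check, one can verify that this point lies in $A$ and attains the bound. The helper name below is illustrative, not part of the answer:

```python
import math

def extremal_point(n):
    # Hypothetical helper: the candidate maximizer from the
    # Cauchy-Schwarz argument above.
    x1 = math.sqrt((n - 1) / n)
    tail = [-1.0 / math.sqrt(n * (n - 1))] * (n - 1)
    return [x1] + tail

for n in range(2, 12):
    x = extremal_point(n)
    assert abs(sum(x)) < 1e-12                          # coordinates sum to 0
    assert abs(sum(t * t for t in x) - 1) < 1e-12       # point lies on the unit sphere
    assert abs(x[0] - math.sqrt((n - 1) / n)) < 1e-12   # first entry attains the bound
```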


Let's start by loosening the requirements to $\sum x_i = 0$ and $\sum x_i^2 \leq 1$. Note that this loosened problem has exactly the same optimal solution as your problem. That's because if $\sum x_i^2 < 1$, then you have room to increase $x_1$ by a minuscule amount and decrease $x_2$ by the same amount. This won't change $\sum x_i$, and while it might increase $\sum x_i^2$ a little, we have room for that.

Why did I loosen the requirements? Because it gives us some extra freedom to manipulate the variables and still stay within the requirements. Most notably, to prove the following:

In an optimal solution, all the $x_i$ for $i \geq 2$ are equal.

Why? Assume you can find $x_i\neq x_j$ with $i, j \geq 2$. Then we may replace both of them with $\frac{x_i + x_j}2$. This does not change $\sum x_i$, but it strictly decreases $\sum x_i^2$, since $$ \left(\frac{x_i + x_j}2\right)^2 + \left(\frac{x_i + x_j}2\right)^2 = \frac{x_i^2 + 2x_ix_j + x_j^2}2 < x_i^2 + x_j^2, $$ where the inequality is strict because $2x_ix_j < x_i^2 + x_j^2$ whenever $x_i \neq x_j$. Performing this substitution thus gives us room to increase $x_1$ and decrease $x_2$ a little, as described above.
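Here is a tiny numerical illustration of this averaging step; the specific point is just an example satisfying the loosened constraints:

```python
# An example feasible point for n = 4 under the loosened constraints:
# the coordinates sum to 0 and the sum of squares is at most 1.
x = [0.5, 0.1, -0.2, -0.4]
assert abs(sum(x)) < 1e-12
assert sum(t * t for t in x) <= 1

# Replace the 2nd and 4th coordinates (both with index >= 2 in the
# answer's 1-based notation) by their average: the sum is unchanged,
# while the sum of squares strictly drops, creating slack.
m = (x[1] + x[3]) / 2
y = [x[0], m, x[2], m]
assert abs(sum(y) - sum(x)) < 1e-12
assert sum(t * t for t in y) < sum(t * t for t in x)
```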

(This is possible to prove without loosening the requirements, but then you have to be very specific about how much you increase $x_1$ and decrease $x_2$ after swapping out $x_i$ and $x_j$, and I didn't want to have to bother with solving those equations.)

Once you know that an optimal solution must have all the $x_i, i\geq 2$ equal, your requirements suddenly become two equations in two unknowns, and the problem becomes a lot easier to solve with just standard algebra.
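For completeness, here is a sketch of that final bit of algebra, writing $x_2=\cdots=x_n=t$:

```latex
\begin{aligned}
x_1 + (n-1)t &= 0, \\
x_1^2 + (n-1)t^2 &= 1.
\end{aligned}
\qquad
\text{Substituting } x_1 = -(n-1)t:\quad
n(n-1)t^2 = 1
\;\Longrightarrow\;
t = \pm\frac{1}{\sqrt{n(n-1)}},\quad
x_1 = \mp\sqrt{\frac{n-1}{n}}.
```

Taking $t = -\frac{1}{\sqrt{n(n-1)}}$ gives the maximum $x_1 = \sqrt{\frac{n-1}{n}}$, matching the Cauchy–Schwarz answer above.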

For the pedantic out there: I haven't actually proven that an optimal solution exists, only what an optimal solution must look like if it exists. Noting that the set of allowed values of $(x_1, x_2, \ldots, x_n)$ is compact and that the projection $(x_1, \ldots, x_n) \mapsto x_1$ is continuous is probably the fastest way to show that a maximum exists, assuming these theorems are known at this stage.