Maximizing a quadratic objective subject to sum constraints


Is it possible to solve the following problem?

$$ \max \,{x^ \top }x \\[0.1in] st: \sum\limits_{i = 1}^n {{x_i} = 1} \\[0.1in] 0 \le x_i \le 1 $$

Intuitively, the maximum is attained when exactly one of the $n$ variables $x_i$ equals $1$ and all the others are zero. Is there a way to prove this?

Using Lagrange multipliers, it is easier to see that the minimum occurs when each $x_i$ takes the value $1/n$.
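A quick numerical sanity check of both conjectures (a sketch in Python; the choice of $n = 5$, the sample count, and the sampling scheme are illustrative assumptions, not part of the question):

```python
import math
import random

def objective(x):
    """x^T x for a vector given as a list of floats."""
    return sum(xi * xi for xi in x)

def random_simplex_point(n):
    """Sample a point from {x_i >= 0, sum x_i = 1} by normalizing
    independent exponential draws (1 - random() avoids log(0))."""
    draws = [-math.log(1.0 - random.random()) for _ in range(n)]
    total = sum(draws)
    return [d / total for d in draws]

random.seed(0)
n = 5
best_sampled = max(objective(random_simplex_point(n)) for _ in range(10_000))

vertex = [1.0] + [0.0] * (n - 1)   # e_1, the conjectured maximizer
uniform = [1.0 / n] * n            # the conjectured minimizer

print(objective(vertex))    # 1.0
print(objective(uniform))   # approximately 1/n
print(best_sampled < 1.0)   # random interior points stay strictly below 1
```

Random interior samples never reach the value $1$ achieved at a unit vector, and none dips below the value $1/n$ achieved at the uniform point, consistent with the claims above.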


There are 2 best solutions below

On BEST ANSWER

It can be proven using the Karush-Kuhn-Tucker conditions.

For each inequality $x_i \le 1$ introduce an associated Lagrange multiplier $\mu_i$, and for the equality constraint $\sum x_i = 1$ introduce a multiplier $\lambda$. (The lower bounds $0 \le x_i$ also carry multipliers in general, but at the point checked below they can be taken to be zero, so they are omitted.) The KKT conditions have three parts:

a) stationarity

$$ \nabla \left(\sum_{i=1}^n x_i^2\right) = \sum_{i=1}^n \mu_i \nabla \left(x_i-1\right) + \lambda \nabla \left(\sum_{i=1}^n x_i-1 \right)$$

b) primal feasibility

$$\sum x_i = 1$$

c) complementary slackness

$$\mu_i(x_i-1) = 0$$

Stationarity immediately becomes $$ 2x_i - \lambda - \mu_i =0 , i=1, \ldots,n.$$

Complementary slackness allows $\mu_i \ne 0$ only when $x_i = 1$, and primal feasibility rules out more than one $x_i$ equal to $1$ (for $n \ge 2$). Suppose exactly one index $k \le n$ has $x_k = 1$. For every $i \ne k$ we then have $x_i < 1$, so $\mu_i = 0$ and stationarity gives $x_i = \lambda/2$. Substituting into $\sum x_i = 1$ yields $1 + (n-1)\lambda/2 = 1$, hence $\lambda = 0$, so $x_i = \mu_i = 0$ for all $i \ne k$, and stationarity at index $k$ gives $\mu_k = 2$.
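The certificate in this answer can be checked mechanically; a minimal sketch in Python (the values $n = 4$ and $k = 0$ are arbitrary illustrative choices):

```python
n = 4
k = 0  # the single index with x_k = 1; any index works by symmetry

x = [1.0 if i == k else 0.0 for i in range(n)]
lam = 0.0
mu = [2.0 if i == k else 0.0 for i in range(n)]

# Stationarity: 2 x_i - lambda - mu_i = 0 for every i
stationarity = all(abs(2 * xi - lam - mu_i) < 1e-12 for xi, mu_i in zip(x, mu))

# Primal feasibility: sum x_i = 1 and 0 <= x_i <= 1
primal = abs(sum(x) - 1.0) < 1e-12 and all(0.0 <= xi <= 1.0 for xi in x)

# Complementary slackness: mu_i (x_i - 1) = 0 for every i
slack = all(abs(mu_i * (xi - 1.0)) < 1e-12 for xi, mu_i in zip(x, mu))

print(stationarity, primal, slack)  # True True True
```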

On

This follows immediately from the convexity of $x \mapsto x^\top x$: a convex function over a polytope attains its maximum at a vertex, and the vertices of the feasible set (the standard simplex) are exactly the unit vectors $e_1, \ldots, e_n$.
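One way to make "select endpoints" precise without appealing to KKT at all (an elementary argument using only the constraints):

$$ x^\top x = \sum_{i=1}^n x_i^2 \le \sum_{i=1}^n x_i = 1, $$

since $0 \le x_i \le 1$ implies $x_i^2 \le x_i$. Equality holds iff every $x_i \in \{0,1\}$, and together with $\sum_i x_i = 1$ this forces exactly one $x_i = 1$ and all others zero.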