Maximize weights in a weighted average using Lagrange multipliers


For values $\mathbf{a}=(a_1, a_2, ..., a_N)$ with corresponding weights $\mathbf{w}=(w_1, w_2, ..., w_N)$ (assume $\sum_{i=1}^{N} w_i = 1$ and $0 \le w_j \le 1$ for $j \in {\{1,2,...,N\}}$), the weighted average is

$$ \bar{a} = \sum_{i=1}^{N} w_i a_i $$

How can I solve for the weights $\mathbf{w}$ that maximize $\bar{a}$ using the method of Lagrange multipliers?

It is obvious that the solution will give a value 1 to the weight that corresponds to the largest value in $\mathbf{a}$, and 0 for the remaining weights, e.g. $\mathbf{a}=(1,2,3)$ will give $\mathbf{w}=(0,0,1)$ and $\bar{a}=3$. However, I am somehow not able to show this using Lagrange Multipliers.
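The claimed solution is easy to sanity-check numerically. Below is a minimal Python sketch using the example values above; sampling normalized uniform draws is just one convenient way to generate weights from the simplex, not part of the original problem:

```python
import random

a = [1.0, 2.0, 3.0]
N = len(a)

def weighted_average(w, a):
    return sum(wi * ai for wi, ai in zip(w, a))

# Claimed optimum: all weight on the largest entry of a.
best = max(a)

random.seed(0)
for _ in range(10_000):
    # Sample a random point on the probability simplex
    # (nonnegative weights summing to 1).
    raw = [random.random() for _ in range(N)]
    total = sum(raw)
    w = [r / total for r in raw]
    # No sampled weight vector ever beats the vertex solution.
    assert weighted_average(w, a) <= best + 1e-12
```

Every random weight vector yields an average at most $\max_i a_i$, consistent with $\mathbf{w}=(0,0,1)$ being optimal.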

What I have tried is to solve the following set of equations ($j \in {\{1,2,...,N\}}$) $$ \dfrac{\partial \bar{a}}{\partial w_j} = \lambda \dfrac{\partial g}{\partial w_j} $$ with the constraint equation

$$ g = \sum_{i=1}^{N} w_i = 1 $$

I then calculate $\dfrac{\partial \bar{a}}{\partial w_j} = a_j$ and $\dfrac{\partial g}{\partial w_j} = 1$, and thus get $a_j = \lambda$ for all $j$, which is obviously not always true; moreover, these equations do not involve the $w_j$ at all, so there is nothing to solve for.

It seems like a straightforward problem. How is this done correctly?


There are 2 best solutions below


Both the function you want to optimize and the constraint are linear, so their level sets are hyperplanes, and their normal vectors, $\mathbf{a}$ and $(1,1,\dots,1)$, do not necessarily coincide. This leads to one of the following:

  1. The $a_{i}$ are all equal, meaning any combination of weights yields the same weighted average, namely the common value of the $a_i$; or
  2. The $a_{i}$ are not all equal, meaning the extrema lie at the vertices of the constraint set, i.e. one weight equals $1$ while the others equal $0$: a maximum when the weight of the largest $a_{i}$ is $1$, and a minimum when the weight of the smallest $a_{i}$ is $1$.
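Both cases can be checked directly. A minimal Python sketch, using the example values from the question (the vertices of the constraint set are the unit vectors $e_j$, where the weighted average is simply $a_j$):

```python
# Case 1: all a_i equal -- every weight vector gives the same average.
a_equal = [2.0, 2.0, 2.0]
for w in ([1.0, 0.0, 0.0], [0.2, 0.3, 0.5], [1/3, 1/3, 1/3]):
    avg = sum(wi * ai for wi, ai in zip(w, a_equal))
    assert abs(avg - 2.0) < 1e-12

# Case 2: a_i not all equal -- extrema at the vertices e_j,
# where the weighted average equals a[j].
a = [1.0, 2.0, 3.0]
vertex_values = a[:]               # value at vertex e_j is just a[j]
assert max(vertex_values) == 3.0   # maximum: weight 1 on the largest a_i
assert min(vertex_values) == 1.0   # minimum: weight 1 on the smallest a_i
```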

You cannot use the Lagrangian method while ignoring the $0 \le w_j \le 1$ constraints. In fact, for optimization problems with inequality constraints, you need the generalized version of the Lagrangian method known as the KKT conditions. However, if you want to use the classical Lagrangian method, you need to reformulate the optimization problem so that it has no inequality constraints. For instance, letting $w_i = v_{i}^{2}$ guarantees $w_i \ge 0$ automatically (and the constraint $\sum_i v_i^2 = 1$ then forces $w_i \le 1$), so you can solve the following optimization problem:

$$ \text{maximize } \sum_{i=1}^{N} v_{i}^{2} a_i \quad \text{subject to} \quad \sum_{i=1}^{N} v_{i}^{2} = 1. $$

After finding all the $v_i$ in this problem, you can recover the $w_i$ through the relationship $w_i = v_i^2$.

P.S. I leave it to you to convince yourself why these two optimization problems are equivalent.
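One way to see how the substitution resolves the original difficulty (a sketch): the Lagrangian of the substituted problem is

$$ \mathcal{L}(\mathbf{v}, \lambda) = \sum_{i=1}^{N} v_i^2 a_i - \lambda \left( \sum_{i=1}^{N} v_i^2 - 1 \right), $$

so the stationarity conditions read

$$ \frac{\partial \mathcal{L}}{\partial v_j} = 2 v_j (a_j - \lambda) = 0 \quad \Longrightarrow \quad v_j = 0 \ \text{ or } \ a_j = \lambda. $$

Unlike the original attempt, $\lambda$ need not equal every $a_j$: at a critical point, $v_j$ can be nonzero only for indices with $a_j = \lambda$, the constraint forces $\sum_{j:\, a_j = \lambda} v_j^2 = 1$, and the objective value at such a point is exactly $\lambda$. Taking $\lambda = \max_i a_i$ gives the maximum and $\lambda = \min_i a_i$ the minimum, recovering the vertex solution $w_{\arg\max_i a_i} = 1$ (when the maximizer is unique).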