How to maximize scalar product using Lagrangian methods


Maximize $U(x) = u \cdot x$ subject to $p \cdot x = w$,

given that $u, p, x \in \mathbb{R}^L_+$ and $w \in \mathbb{R}_+$.

I can solve it classically:

\begin{align} u \cdot x &= \sum_i u_i x_i \\ &= \sum_i \frac{u_i}{p_i} p_i x_i\\ &\le \left(\max_i \frac{u_i}{p_i}\right) \sum_j p_j x_j\\ &= \left(\max_i \frac{u_i}{p_i}\right) w \end{align}

with equality when $x = \frac{w}{p_i} e_i$ where $i = \text{argmax}_j \frac{u_j}{p_j}$ and $e_i$ is the $i$-th basis vector.
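The closed form above can be checked numerically with an off-the-shelf LP solver, since the problem is a linear program. A minimal sketch, using made-up data (the vectors `u`, `p` and the wealth `w` below are illustrative, not from the question):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example data: 3 goods.
u = np.array([3.0, 5.0, 2.0])   # utility weights
p = np.array([2.0, 4.0, 1.0])   # prices, all positive
w = 10.0                        # wealth

# linprog minimizes, so negate u to maximize u . x,
# subject to p . x = w and x >= 0 (linprog's default bounds).
res = linprog(-u, A_eq=p.reshape(1, -1), b_eq=[w])

# Closed-form optimum: spend everything on the good with the
# largest utility-per-price ratio u_i / p_i.
i = np.argmax(u / p)
x_star = np.zeros_like(u)
x_star[i] = w / p[i]

print(res.x)                            # solver's optimal bundle
print(np.allclose(res.x, x_star))       # matches the closed form
print(np.isclose(-res.fun, (u / p).max() * w))  # optimal value
```

Here $u/p = (1.5, 1.25, 2.0)$, so all wealth goes to good 3 and the optimal value is $2.0 \cdot 10 = 20$.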

But I would like to see a proof using Lagrange multipliers.


BEST ANSWER

Let $f(x_1,\ldots,x_n)=u_1x_1+\ldots+u_nx_n$ and $g(x_1,\ldots,x_n)=p_1x_1+\ldots+p_nx_n$, where $\textbf{u}\ge 0$ and $\textbf{p}> 0$ (coordinate-wise).

Solve

\begin{align} \qquad\qquad\qquad\qquad \max\,\,\, &f(x_1,\ldots,x_n) \qquad\qquad\text{subject to} \\ &g(x_1,\ldots,x_n) = a \\ &x_1 \ge 0 \\ &\;\;\vdots\\ &x_n \ge 0 \\ \end{align} This is a constrained optimization problem with inequality constraints, which can be solved by the Kuhn-Tucker method (wiki and original paper). We use $\lambda$ as the multiplier on the equality constraint and $\mu_i$ as the multipliers on the non-negativity constraints. The Kuhn-Tucker conditions are:

\begin{align} u_i &= \lambda p_i + \mu_i, \text{ for } i=1,\ldots,n \\ a &= p_1x_1+\ldots+p_nx_n \\ \mu_i &\le 0, \text{ for } i=1,\ldots,n \\ \mu_i x_i &= 0, \text{ for } i=1,\ldots,n \\ \end{align} Clearly, not all $x_i$ can equal $0$, since the budget constraint would fail for $a>0$. Let $i^*$ denote an index for which $x_{i^*}>0$. Then $\mu_{i^*}=0$ and $\lambda = \frac{u_{i^*}}{p_{i^*}}$. It then follows that for indices $i\ne i^*$ we must have $\mu_i = u_i - \frac{u_{i^*}}{p_{i^*}}p_i$ and $\frac{\mu_i}{p_i} = \frac{u_i}{p_i} - \frac{u_{i^*}}{p_{i^*}}$, which must be non-positive ($p_i>0$ by assumption).

Thus, $i^*$ must be an index for which $\frac{u_i}{p_i}$ is largest. For all other indices $\mu_i < 0$, so complementary slackness forces $x_i=0$, and hence $x_{i^*} = \frac{a}{p_{i^*}}$.
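The four Kuhn-Tucker conditions above can be verified numerically at the corner candidate. A minimal sketch, using this answer's sign convention ($u_i = \lambda p_i + \mu_i$ with $\mu_i \le 0$) and made-up data for `u`, `p`, `a`:

```python
import numpy as np

# Hypothetical data for illustration: 3 goods.
u = np.array([3.0, 5.0, 2.0])
p = np.array([2.0, 4.0, 1.0])
a = 10.0

# Candidate from the answer: put all wealth on the best ratio.
i_star = np.argmax(u / p)
x = np.zeros_like(u)
x[i_star] = a / p[i_star]

# Multipliers in the answer's sign convention:
#   u_i = lambda * p_i + mu_i,  mu_i <= 0,  mu_i * x_i = 0.
lam = u[i_star] / p[i_star]
mu = u - lam * p                      # stationarity, rearranged

assert np.isclose(mu[i_star], 0.0)   # mu_{i*} = 0
assert np.all(mu <= 1e-12)           # mu_i <= 0 for all i
assert np.allclose(mu * x, 0.0)      # complementary slackness
assert np.isclose(p @ x, a)          # budget constraint binds
print("all Kuhn-Tucker conditions hold at the corner solution")
```

With this data $\lambda = 2$ and $\mu = (-1, -3, 0)$: only the best-ratio good has $\mu_i = 0$, so it is the only one that can be positive.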


The problem, as posed (without the non-negativity constraints $x \ge 0$), has no solution unless $u$ and $p$ are collinear, i.e. $u=cp$, in which case it becomes trivial. Let's fix some notation:

$f(x_1,x_2) = u_1x_1+u_2x_2$ be the objective function,

$g(x_1,x_2)=p_1x_1+ p_2x_2$ and $g(x_1,x_2)=a$ be the constraint (the case of $\mathbb{R}^d$ is similar).

Wlog $p_1\ne 0$. Solve for $x_1 = \frac{a}{p_1}-\frac{p_2}{p_1}x_2$ and substitute:

$F(x_2) = u_1\frac{a}{p_1} + \left(u_2-u_1\frac{p_2}{p_1}\right)x_2$ is the new objective function. If $u\neq cp$ for every $c$, then the term in parentheses is nonzero and $F$ is unbounded (let $x_2 \to \pm\infty$).

If we try to apply the Lagrange multiplier method directly, the first-order conditions are $\textbf{u} = \lambda\textbf{p}$, which have no solution unless $u$ and $p$ are collinear.
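This answer's point, that the non-negativity constraints are what make the problem well-posed, can be illustrated by solving the same LP with and without them. A minimal sketch with made-up data (`u`, `p`, `a` are illustrative; `u` is deliberately not collinear with `p`):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data with u not collinear to p.
u = np.array([3.0, 5.0])
p = np.array([2.0, 4.0])
a = 10.0

# With x >= 0 (linprog's default bounds) the problem
# has a finite optimum at a corner of the budget set...
bounded = linprog(-u, A_eq=p.reshape(1, -1), b_eq=[a])

# ...but dropping the non-negativity constraints
# (bounds=(None, None)) makes it unbounded, exactly as the
# substitution F(x_2) above shows.
free = linprog(-u, A_eq=p.reshape(1, -1), b_eq=[a],
               bounds=(None, None))

print(bounded.status)  # 0: optimal solution found
print(free.status)     # 3: problem is unbounded
```

Here the bounded problem attains $\max_i \frac{u_i}{p_i} \cdot a = 1.5 \cdot 10 = 15$, while the unconstrained one can grow without limit along any direction $d$ with $p \cdot d = 0$ and $u \cdot d > 0$.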