Lagrange multipliers - perturbation of constraints


I have been spending some time learning about Lagrange multipliers lately.

Something is puzzling me, though. Reading around (including on Wikipedia), I have seen multiple times the interpretation that Lagrange multipliers represent the rate of change of the optimal value of the objective function when a constraint is perturbed.

More formally, when solving the problem $\min f(x)$ subject to $h_1(x) = 0,\ \dots,\ h_i(x) = r,\ \dots,\ h_m(x) = 0$, and denoting the optimal solution by $x(r)$, we get that $\frac{df}{dr}x(r) = -\lambda_i$.

Firstly, should it not be $\frac{df(x(r))}{dr}$ ?

Secondly, which $\lambda$'s? We are varying $r$, and as far as I can see there is no guarantee that the Lagrange multipliers will be the same for all $r$: even though the Lagrange condition takes the same form, the constraints are different. Is it for $r = 0$? In that case, the proof I have seen so far would not work (it just applies the chain rule to the function, and then uses the fact that the Lagrange condition is satisfied at $x(r)$ with the multipliers set to $\lambda_i$).

Can someone please explain this to me? It is not clear where the Lagrange multipliers come from in this case.

On BEST ANSWER

You are right that it should be ${d\over dr}f(x(r))$.

The result holds for any $r$, not just $r = 0$. When you solve the Lagrange problem, you find both $x = x(r)$ and $\lambda = \lambda(r)$: the multipliers depend on $r$ just as the solution does. Substitute $x = x(r)$ into the objective function and differentiate; the chain rule together with the stationarity condition $\nabla f(x(r)) = -\sum_j \lambda_j(r)\,\nabla h_j(x(r))$ and the differentiated constraints (which kill every term except the $i$-th) gives the general formula $${d\over dr}f(x(r)) = -\lambda_i(r).$$
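To see this concretely, here is a numeric sanity check on a toy problem of my own choosing (not from the question): minimize $f(x,y) = x^2 + y^2$ subject to $h(x,y) = x + y - 1 = r$. With the convention $L = f + \lambda(h - r)$, stationarity gives $x = y = (1+r)/2$ and $\lambda(r) = -(1+r)$, so the formula predicts ${d\over dr}f(x(r)) = -\lambda(r) = 1 + r$. A finite-difference derivative of the optimal value confirms this:

```python
# Toy problem (hypothetical, for illustration):
#   minimize f(x, y) = x^2 + y^2  subject to  x + y - 1 = r.
# Lagrangian L = f + lam*(h - r); stationarity: 2x + lam = 0, 2y + lam = 0.
# Closed form: x = y = (1 + r)/2, lam(r) = -(1 + r), f(x(r)) = (1 + r)^2 / 2.

def solve(r):
    """Return (optimal objective value, Lagrange multiplier) for a given r."""
    x = (1 + r) / 2.0
    lam = -2.0 * x            # from the stationarity condition 2x + lam = 0
    fval = 2.0 * x**2         # f(x, x) = x^2 + x^2 at the optimum x = y
    return fval, lam

r = 0.3
eps = 1e-6

# Central finite difference of the optimal value with respect to r.
f_plus, _ = solve(r + eps)
f_minus, _ = solve(r - eps)
dfdr = (f_plus - f_minus) / (2 * eps)

_, lam = solve(r)

# Both quantities should be close to 1 + r = 1.3.
print(dfdr, -lam)
```

The multiplier is re-solved at each $r$, which is exactly the answer's point: the formula pairs $d f(x(r))/dr$ with $\lambda_i(r)$ at that same $r$, not with a single fixed multiplier.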