Lagrange's method


Suppose that I am asked to determine the maximum and the minimum of a continuous and differentiable function $$ f(x, y, z)$$ with a constraint $$ g(x, y, z)=0.$$

I have some trouble understanding the method, all the more in a case like this, where it is harder to picture what the function $f$ looks like.

So far I have proven that when $f$ is restricted to those points for which $g(x,y,z)=0$, if it has a local maximum/minimum, then the gradient of $f$ is parallel to the gradient of $g$ at that point, i.e. $$ \nabla f-\lambda \nabla g=0. $$ My question is: is it enough just to consider the critical points, i.e. those $(x, y, z)$ for which the gradient of the Lagrange function $$ L \left(x, y, z, \lambda \right)=f(x, y, z)-\lambda g(x, y, z) $$ is zero, in order to find the maximum and minimum of $f$? Many worked optimization exercises I have seen do only that.
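For concreteness, here is a small sketch of what "finding the critical points of $L$" amounts to, using sympy with an example of my own choosing (not from the question): $f = x + y + z$ constrained to the unit sphere $g = x^2 + y^2 + z^2 - 1 = 0$.

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)

# Illustrative choice (my own): f = x + y + z on the unit sphere.
f = x + y + z
g = x**2 + y**2 + z**2 - 1

L = f - lam * g
# Set all four partial derivatives of L to zero and solve the system.
eqs = [sp.diff(L, v) for v in (x, y, z, lam)]
sols = sp.solve(eqs, [x, y, z, lam], dict=True)

values = sorted(f.subs(s) for s in sols)
print(values)  # the two critical values: [-sqrt(3), sqrt(3)]
```

Here the two critical points $x=y=z=\pm 1/\sqrt{3}$ are exactly the constrained minimum and maximum, which is the pattern those worked exercises rely on.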


There are 3 answers below.

---

Yes, this is enough (assuming everything is differentiable and nice).

That's because if we take the gradient of the function $L$, the first three components together become $$\nabla f(x,y,z)-\lambda\nabla g(x,y,z),$$ and the fourth and final component (i.e. $\frac{\partial L}{\partial\lambda}$) becomes $-g(x,y,z)$. We have a solution if and only if both of these are zero, which is to say, if and only if the gradient of $L$ is zero.
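This structure can be checked symbolically for generic $f$ and $g$ (a sketch of mine, using sympy's undetermined functions):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam')
# Generic, unspecified functions f and g, to inspect the structure of grad L.
f = sp.Function('f')(x, y, z)
g = sp.Function('g')(x, y, z)

L = f - lam * g
grad_L = [sp.diff(L, v) for v in (x, y, z, lam)]

# First three components: df/dv - lam * dg/dv for v in (x, y, z).
print(grad_L[0])  # Derivative(f(x, y, z), x) - lam*Derivative(g(x, y, z), x)
# Fourth component: -g, which vanishes exactly on the constraint surface.
print(grad_L[3])  # -g(x, y, z)
```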

---

Essentially, yes. Assuming everything is smooth enough, the only exception is if you have other constraints on $\mathbf x$. For example, if you add the requirement $x_1 \le 3$, it is possible to have a maximum at a point where $x_1 = 3$, so you need to check that case separately. (Edit: If you are interested in global upper and lower bounds rather than local maxima/minima which are actually attained by the function, you also need to check any non-compact directions. See below.)
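A toy example of my own that makes the extra check concrete: maximise $f = x_1$ on the circle $x_1^2 + x_2^2 = 25$ with the added requirement $x_1 \le 3$. The Lagrange critical points are $(\pm 5, 0)$, and the one with the larger $f$ violates $x_1 \le 3$, so the boundary $x_1 = 3$ must be examined separately.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

# Toy example (mine): f = x1 on the circle, with the extra requirement x1 <= 3.
f = x1
g = x1**2 + x2**2 - 25

L = f - lam * g
sols = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)

# Critical points are (5, 0) and (-5, 0); only (-5, 0) satisfies x1 <= 3,
# so the maximum must be sought on the boundary x1 = 3 separately:
feasible = [s for s in sols if s[x1] <= 3]
boundary_x2 = sp.solve(g.subs(x1, 3), x2)  # x2 = -4 or 4, where f = 3
print(feasible, boundary_x2)
```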

The reason follows almost immediately from the result you quote, $\nabla f-\lambda \nabla g=0$ (which intuitively says that $f$ is stationary except possibly for some variation in the direction $\nabla g$ which is perpendicular to the constraint surface). The derivatives of $L$ with respect to $\mathbf x$ give the equations $\nabla f-\lambda \nabla g=0$, whilst the derivative of $L$ with respect to $\lambda$ imposes $g = 0$.

Note that the maxima/minima of $f$ subject to $g=0$ need not be maxima/minima of $L$, only stationary points.
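This is easy to check numerically. In the illustrative problem $f = x+y+z$, $g = x^2+y^2+z^2-1$ (my own choice of example), the constrained maximum at $x=y=z=1/\sqrt 3$, $\lambda = \sqrt 3/2$ is a saddle point of $L$: some nearby points in $(x,y,z,\lambda)$-space give a smaller value of $L$, others a larger one.

```python
import math

def L(x, y, z, lam):
    # L = f - lam * g for f = x+y+z, g = x^2+y^2+z^2-1.
    return (x + y + z) - lam * (x**2 + y**2 + z**2 - 1)

a = 1 / math.sqrt(3)       # constrained maximum at x = y = z = a
lam0 = math.sqrt(3) / 2    # the corresponding multiplier
L0 = L(a, a, a, lam0)      # equals sqrt(3), since g = 0 there

# Moving only in x decreases L; also decreasing lambda increases L:
print(L(a + 0.01, a, a, lam0) < L0)      # True
print(L(a + 0.01, a, a, lam0 - 1) > L0)  # True
```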


Edit: If you are interested in upper and lower bounds, which may not actually be attained by the function in its domain and hence aren't strictly maxima or minima, then you need to explicitly consider what happens as you move off in non-compact directions. If you like, you could think of this as adding a boundary, e.g. at infinity, that you have to worry about. This is already the case for ordinary unconstrained functions like $f(x) = x^2$ (no upper bound, no maxima), or $f(x) =(x^2-1)^2$ (which has no upper bound, but does have a local maximum at $x=0$), or $f(x) = 1/((x-1)^2+1)+1/((x+1)^2 +1)$ (which has a lower bound of $0$ but a local minimum of $f=1$ at $x=0$).

Or consider $f(r, \theta) = u(\theta) (1+e^{-r})$ for something like $u=1+\cos^2(\theta)$. Note that this last function has a lower bound $u(\theta)$ in each direction $\theta$, and hence an overall lower bound equal to the lowest value $u$ can take, which is $1$. But the function is strictly larger than $1$ everywhere in its domain, so there is no point where it attains the "minimum" value of $1$.
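A quick numeric sanity check of my own on the last two examples, showing infima that are approached but never attained:

```python
import math

def f1(x):
    # 1/((x-1)^2+1) + 1/((x+1)^2+1): infimum 0 as |x| -> oo, local min f1(0) = 1.
    return 1 / ((x - 1) ** 2 + 1) + 1 / ((x + 1) ** 2 + 1)

def f2(r, theta):
    # u(theta) * (1 + e^{-r}) with u = 1 + cos^2(theta): infimum 1, never attained.
    u = 1 + math.cos(theta) ** 2
    return u * (1 + math.exp(-r))

print(f1(0))              # 1.0, the local minimum at x = 0
print(f1(100))            # about 0.0002: heading towards the unattained infimum 0
print(f2(5, math.pi / 2)) # about 1.0067: strictly above the unattained infimum 1
```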

---

Not enough. You also need to check the boundary of the domain on which the function is defined; the minimum or maximum may occur there.
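A small sketch with an example of my own: maximise and minimise $f = xy$ on the segment $x + y = 1$, $0 \le x \le 1$. Lagrange's method finds only the interior critical point $x=y=\tfrac12$ (the maximum); the minimum $f=0$ sits at the endpoints and is never produced by the stationarity conditions.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Toy example (mine): f = x*y on the segment x + y = 1 with 0 <= x <= 1.
f, g = x * y, x + y - 1

L = f - lam * g
sols = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(sols)  # only the interior critical point x = y = 1/2 (f = 1/4, the max)

# The minimum f = 0 sits at the endpoints (x, y) = (0, 1) and (1, 0),
# which Lagrange's method alone never finds:
endpoints = [f.subs({x: 0, y: 1}), f.subs({x: 1, y: 0})]
print(endpoints)  # [0, 0]
```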