Help to clarify procedure to solve constrained optimisation problem


The following are excerpts from my textbook:

[textbook excerpt image not shown]

In the first box, it says that $P_0$ may be a point with $\nabla g(P_0)=\vec{0}$ when $P_0$ is the max/min of $f$ subject to the constraint $g=c$. To me, this means that $P_0$ is a stationary point of the constraint function $g$.

Why do we need to consider this condition too? I thought the first two conditions (the gradients of $f$ and $g$ are parallel at $P_0$, i.e. $\nabla f = \lambda \nabla g$, or $P_0$ is an endpoint of the constraint curve $g=c$) were all we need to look at when determining the optimum of $f$ subject to a constraint. This is based on looking at diagrams like this: [diagram not shown]

I'd also like to request an example where $P_0$ meets the 3rd condition, preferably using functions of 2 variables so that I can visualize it.

Best answer

There are two separate (but related) problems: $P_E: \ \min \{ f(x) | g(x) = c \}$ and $P_I: \ \min \{ f(x) | g(x) \le c \}$. The first box above addresses $P_E$, the second $P_I$.

If $\hat{x}$ solves $P_I$, then either $g(\hat{x}) = c$ or $g(\hat{x}) < c$. In the first case, the conditions from the first box apply, in the second case, $\hat{x}$ is a local minimum for the unconstrained problem $P_U: \ \min f(x)$, so we look for points that have $\nabla f(x) = 0$.
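The case split for $P_I$ can be checked symbolically. Here is a small sketch (my own choice of $f$ and $g$, not from the answer above): minimize $f = x^2 + y^2$ subject to $x + y \le 1$. The unconstrained minimum already satisfies the constraint strictly, so the solution comes from the interior case $\nabla f = 0$.

```python
# Illustration of the interior case of P_I (f and g are illustrative
# choices, not from the original answer): min x^2 + y^2 s.t. x + y <= 1.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2          # objective
g = x + y                # constraint function, with c = 1

# Interior case: solve grad f = 0.
interior = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(interior)          # the origin is the only critical point

# Feasibility check: g at the critical point is 0 < 1, so the
# candidate lies strictly inside the constraint region and solves P_I.
sol = interior[0]
print(g.subs(sol) < 1)
```

If instead the unconstrained minimum had violated the constraint, we would fall back to the boundary case $g(\hat{x}) = c$ and the conditions from the first box.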

Addendum: If $g(\hat{x}) = c$, then $\hat{x}$ is also a solution of $P_E$. To use Lagrange multipliers, the gradient of the constraint $\nabla g(\hat{x})$ must be non-zero: this is a constraint qualification, because if $\nabla g(\hat{x}) = \vec{0}$ then $\nabla f(\hat{x}) = \lambda \nabla g(\hat{x})$ would force $\nabla f(\hat{x}) = \vec{0}$, which need not hold at the constrained minimum.

For example, in the problem $\min \{ x \mid x^2 = 0 \}$, it is clear that $\hat{x}=0$ is the only feasible point and hence the solution. But with $g(x) = x^2$ we have $\nabla g(0) = 0$ while $\nabla f(0) = 1$, so $\nabla f(0) = \lambda \nabla g(0)$ has no solution $\lambda$: the Lagrange multiplier technique cannot be applied.
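For a two-variable picture of the same phenomenon, as requested in the question (my own example, not from the textbook), consider the cusp curve:

```latex
\[
\min_{(x,y)} \; x \quad \text{subject to} \quad g(x,y) = y^2 - x^3 = 0.
\]
On the constraint curve, $x^3 = y^2 \ge 0$, so $x \ge 0$, with $x = 0$ attained
only at $P_0 = (0,0)$: this is the constrained minimum. But
\[
\nabla g(x,y) = (-3x^2,\; 2y), \qquad \nabla g(0,0) = \vec{0},
\]
while $\nabla f(0,0) = (1,0) \ne \vec{0}$, so $\nabla f = \lambda \nabla g$ has
no solution at $P_0$.
```

This is exactly the third condition in the first box: the optimum sits at a point where the constraint's gradient vanishes (the curve has a cusp there), so it would be missed if you only looked for points with parallel gradients or endpoints.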