Check numerical solution to constrained optimization


I want to optimize a function $f:\mathbb{R}^2\to\mathbb{R}$ subject to $x_1\geq0$ and $x_1+x_2\geq0$. I know $f$, $\frac{\partial f}{\partial x_1}$ and $\frac{\partial f}{\partial x_2}$ in closed form, but since they are nasty, I use a computer to solve the problem numerically, yielding $(\hat{x}_1,\hat{x}_2)$.

  • Will the numerical solution still (approximately) satisfy the first-order conditions, i.e. $\frac{\partial f(\hat{x}_1,\hat{x}_2)}{\partial x_1}\approx0$ and $\frac{\partial f(\hat{x}_1,\hat{x}_2)}{\partial x_2}\approx0$?

  • If not, is there another equation which the numerical solution should (approximately) satisfy that takes the constraints into account?
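As an aside, here is how such a problem can be set up numerically with SciPy's SLSQP solver. The objective below is a placeholder of my own choosing standing in for the actual (nasty) $f$; in practice the closed-form $f$ and its gradient would go in its place:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective standing in for the actual f (my own toy choice);
# its unconstrained minimum (2, 1) happens to be feasible and interior.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

# Encode x1 >= 0 and x1 + x2 >= 0 as g(x) >= 0 inequality constraints.
constraints = [
    {"type": "ineq", "fun": lambda x: x[0]},
    {"type": "ineq", "fun": lambda x: x[0] + x[1]},
]

res = minimize(f, x0=np.array([1.0, 1.0]), jac=grad_f,
               method="SLSQP", constraints=constraints)
x_hat = res.x
print(x_hat, grad_f(x_hat))  # interior minimum here, so gradient ~ 0
```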


There is 1 answer below

BEST ANSWER

If the optimal point is in the interior of the feasible set (and not on the boundary), then the partial derivatives at the optimal point will be approximately equal to 0.

If the optimal point is on the boundary, then the partial derivatives generally won't equal 0; there's no reason why they should. What the solution should (approximately) satisfy instead are the Karush–Kuhn–Tucker (KKT) conditions, which take the active constraints into account.
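Concretely, writing the constraints as $g_1(x)=x_1\geq0$ and $g_2(x)=x_1+x_2\geq0$, a regular constrained minimizer satisfies KKT stationarity $\nabla f(\hat{x})=\sum_i\lambda_i\nabla g_i(\hat{x})$ with $\lambda_i\geq0$ and $\lambda_i g_i(\hat{x})=0$. A sketch of checking this numerically, using a toy quadratic of my own choosing (not the asker's $f$) whose minimum lies on the boundary $x_1+x_2=0$:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective (my own choice): unconstrained minimum (1, -2) violates
# x1 + x2 >= 0, so the constrained minimum sits on that boundary.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

# Constraints g_i(x) >= 0 and their (constant) gradients.
g = [lambda x: x[0], lambda x: x[0] + x[1]]
grad_g = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]

res = minimize(f, np.array([1.0, 1.0]), jac=grad_f, method="SLSQP",
               constraints=[{"type": "ineq", "fun": gi} for gi in g])
x = res.x  # approx (1.5, -1.5); grad_f(x) approx (1, 1), not 0

# Recover multipliers for the active constraints by least squares, then
# check stationarity: grad f(x) = sum_i lam_i * grad g_i(x), lam_i >= 0.
active = [i for i in range(2) if g[i](x) < 1e-3]
A = np.column_stack([grad_g[i] for i in active])
lam, *_ = np.linalg.lstsq(A, grad_f(x), rcond=None)
residual = grad_f(x) - A @ lam
print(active, lam, residual)  # residual approx 0 if KKT holds
```

Here only $g_2$ is active, and the recovered multiplier $\lambda_2\approx1$ exactly accounts for the nonzero gradient at the solution.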

If $f$ is not convex and you use a local method like gradient descent (or one of its many descendants), you are only guaranteed to approximate a locally optimal point, not a globally optimal one.
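A common mitigation is multistart: rerun the local solver from several starting points and keep the best result. A sketch with a toy nonconvex objective of my own choosing (not the asker's $f$):

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonconvex objective (my own choice): minima of (x1^2 - 1)^2 sit at
# x1 = +/-1, but only x1 = 1 is feasible under x1 >= 0.
def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + (x[1] - 0.5) ** 2

constraints = [
    {"type": "ineq", "fun": lambda x: x[0]},         # x1 >= 0
    {"type": "ineq", "fun": lambda x: x[0] + x[1]},  # x1 + x2 >= 0
]

# Multistart: local solves from random feasible points, keep the best.
rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(0.0, 2.0, size=2)  # feasible by construction
    res = minimize(f, x0, method="SLSQP", constraints=constraints)
    if res.success and (best is None or res.fun < best.fun):
        best = res
print(best.x, best.fun)
```

This only raises the odds of finding the global optimum; it is still a heuristic, not a guarantee.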