Simple optimization with inequality constraints


Today I started learning constrained optimization and I would like to solve the following problem (both with an equality and with an inequality constraint).

The problem comes from a well-known topic in artificial intelligence known as adversarial examples. The problem I want to solve can be stated as:

$\underset{\theta}{\operatorname{argmax}} \;\; \{ J(x) + \theta^T \nabla_x J(x)\} \quad \text{s.t.} \quad \|\theta\|_{\infty}\leq \epsilon$

I have looked for many ways of solving this problem; however, I cannot arrive at the solution:

$\theta = \epsilon\cdot \operatorname{sgn}(\nabla_x J(x))$

where sgn is the sign function ($-1$ for negative values and $1$ for positive values). I would also appreciate the solution when the constraint is $\|\theta\|_{\infty} = \epsilon$.

I tried to solve both of them using Lagrange multipliers. Note that $\epsilon$ is an arbitrary positive real number and that the variables are vectors.

Thanks in advance.

1 Answer

Since $J(x)$ does not depend on $\theta$, we can drop it and rewrite the problem as

$$\arg\max_\theta \theta^T \nabla_xJ(x)$$

subject to $$-\epsilon \le \theta_i \le \epsilon \quad \forall i \in \{1,\ldots, n\}$$ (this is just the constraint $\|\theta\|_\infty \le \epsilon$ written coordinate-wise),

which is a linear programming problem. Furthermore, the problem is separable, in the sense that we can optimize each coordinate independently.
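To see the separability concretely, here is a minimal sketch in Python/NumPy (the gradient vector `g` and the value of `eps` below are made-up illustrative values, not anything from the original problem): it brute-forces the LP over the corners of the box and compares the result with the coordinate-wise optimum.

```python
import itertools
import numpy as np

g = np.array([0.7, -1.2, 0.05, -0.3])  # stands in for grad_x J(x); illustrative values
eps = 0.1

# A linear objective over a box attains its maximum at a corner,
# so it suffices to check all 2^n sign patterns.
best_val, best_theta = -np.inf, None
for signs in itertools.product([-1.0, 1.0], repeat=len(g)):
    theta = eps * np.array(signs)
    val = theta @ g
    if val > best_val:
        best_val, best_theta = val, theta

# Coordinate-wise (separable) optimum: each theta_i is chosen independently.
theta_sep = eps * np.sign(g)

print(best_theta)                           # [ 0.1 -0.1  0.1 -0.1]
print(theta_sep)                            # same vector
print(np.isclose(best_val, theta_sep @ g))  # True
```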

Since the product of two numbers with the same sign is nonnegative, and we want to maximize the objective, we choose each $\theta_i$ to have the same sign as $\nabla_x J(x)_i$ and to have the largest possible magnitude, which is $\epsilon$.
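In symbols, the per-coordinate argument reads:

$$\max_{-\epsilon \le \theta_i \le \epsilon} \theta_i \, \nabla_x J(x)_i = \epsilon \, \big|\nabla_x J(x)_i\big|, \qquad \text{attained at } \theta_i = \epsilon \, \operatorname{sgn}\!\big(\nabla_x J(x)_i\big).$$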

Hence, the solution is $\theta_i = \epsilon \operatorname{sgn}(\nabla_x J(x)_i)$.
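For context, here is a minimal sketch of how this solution is typically used in the adversarial-example setting the question mentions (the toy loss `J`, the point `x`, and `eps` below are assumptions made for illustration; in practice the gradient would come from a trained model):

```python
import numpy as np

def J(x):
    # Toy loss, purely illustrative; in the adversarial-example setting
    # this would be the model's loss as a function of the input x.
    target = np.array([1.0, -2.0, 0.5])
    return 0.5 * np.sum((x - target) ** 2)

def numerical_grad(f, x, h=1e-6):
    # Central-difference gradient, good enough for this sketch.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

x = np.zeros(3)
eps = 0.25

g = numerical_grad(J, x)
theta = eps * np.sign(g)   # maximizer of the linearized objective over the box
print(J(x), J(x + theta))  # J increases along theta: 2.625 -> ~3.594
```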

Remark: Even with the equality constraint $\|\theta\|_\infty = \epsilon$, the solution remains the same, since the maximizer above already satisfies $\|\theta\|_\infty = \epsilon$.