Suppose I have a function $A(x) > 0$ of a vector $x$, and I want to find a minimal $x$ (minimal in the sense of $\lVert x \rVert$) among those that maximize $A(x)$.
The question is: how can I formulate this so that I can numerically compute the sought $x$?
One way I can think of is $$ x^* = \operatorname{argmax}_x E(x), \qquad E(x) := A(x) - \lVert x \rVert. $$ But this has issues in how it prioritizes $A(x)$ versus $\lVert x \rVert$, because the two terms may live on very different scales. Moreover, $A(x) > 0$ always while $-\lVert x \rVert < 0$, so $E(x)$ may fluctuate around $0$, possibly leading to poor optimization of $E$.
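To make the scale issue concrete, here is a minimal numerical sketch of the penalized formulation. The objective $A$ below is a made-up sinusoidal example (maxima of value $4$ at every $(2\pi k, 2\pi m)$, so the minimal-norm maximizer is the origin), and the weight `lam` is an assumed tuning knob that balances the two terms; the squared norm is used instead of $\lVert x \rVert$ only to keep the objective differentiable at $x = 0$:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical positive objective: maxima of value 4 at (2*pi*k, 2*pi*m),
# so the minimal-norm maximizer is the origin.
def A(x):
    return 2.0 + np.cos(x[0]) + np.cos(x[1])

lam = 0.1  # penalty weight; must be tuned to the scale of A

# Maximize E(x) = A(x) - lam * ||x||^2 by minimizing its negative.
# (The squared norm keeps E smooth at x = 0, where the plain norm is not.)
def negE(x):
    return -(A(x) - lam * np.dot(x, x))

res = minimize(negE, x0=np.array([0.5, -0.5]), method="BFGS")
print(res.x, A(res.x))
```

Note that with a penalty of this kind the solution only coincides with a true maximizer of $A$ when that maximizer happens to sit where the penalty gradient vanishes; in general the penalty biases $x^*$ away from the maximizers of $A$, which is exactly the prioritization problem described above.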
Another option would be $$ E_0(x) := A(x) + 1/\lVert x \rVert. $$ But this is probably worse than the above, since the fractional term is nonlinear and has a singularity at $x = 0$.
Would $E(x)$ above be the best (and simplest) computable form? And can there be a closed-form solution for $x^*$ in terms of the gradient $\nabla_x A$?
Note: please assume $x$ is a vector.
Assumption. For any local maximum points $x^*_1, x^*_2$, $A(x^*_1) = A(x^*_2)$. (For example, sinusoidal functions.)
Let $x^*$ maximize $A(x)$; then you want to find $$\min\left\{ \lVert x \rVert : A(x) \geq A(x^*) \right\}.$$ You may be able to rewrite the constraint via duality so that you do not need to compute $x^*$ separately, but that is mostly a theoretical result. Practically, you want to find $x^*$ first.
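The two-stage recipe above can be sketched numerically. The objective $A$ below is an assumed sinusoidal example (maximum value $4$ at every $(2\pi k, 2\pi m)$, minimal-norm maximizer at the origin); stage 1 uses a few starting points because a single local solve may miss the minimal-norm maximizer, and stage 2 solves $\min \lVert x \rVert^2$ subject to $A(x) \geq A(x^*) - \varepsilon$ with a small tolerance $\varepsilon$:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective with maximum value 4 at every (2*pi*k, 2*pi*m);
# the minimal-norm maximizer is the origin.
def A(x):
    return 2.0 + np.cos(x[0]) + np.cos(x[1])

# Stage 1: multi-start local maximization of A to estimate A(x*) and
# collect candidate maximizers.
starts = [np.array([a, b]) for a in (-6.0, 0.5, 6.5) for b in (-6.0, 0.5, 6.5)]
candidates = [minimize(lambda x: -A(x), s, method="BFGS").x for s in starts]
best_val = max(A(c) for c in candidates)
# Warm start: the minimal-norm candidate among those attaining the max value.
x0 = min((c for c in candidates if A(c) >= best_val - 1e-6),
         key=np.linalg.norm)

# Stage 2: min ||x||^2 subject to A(x) >= A(x*) - eps (SLSQP handles the
# smooth inequality constraint).
res = minimize(lambda x: x @ x, x0, method="SLSQP",
               constraints={"type": "ineq",
                            "fun": lambda x: A(x) - (best_val - 1e-6)})
print(res.x, A(res.x))
```

Since both stages are local solvers, the warm start matters: starting stage 2 from a far-away maximizer (say $(2\pi, 0)$) would leave the solver stuck there, because shrinking the norm immediately exits the feasible set around that maximizer.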