Background: I want to solve an optimization problem like
$$\begin{align*}\text{minimize }&f(x)\\ \text{subject to }&\|x\| \le 1.\end{align*}$$
where $x \in \mathbb{R}^d$, $\|\cdot\|$ is the $L_2$ norm, and $f:\mathbb{R}^d \to \mathbb{R}$ is a concave, continuously differentiable function. I'd like to use an existing solver.
One natural heuristic approach is to choose a constant $c \in \mathbb{R}$, solve the optimization problem
$$\begin{align*}\text{minimize }&f(x) + c \cdot \|x\|^2\\ \text{subject to }&x \in [-1,1]^d,\end{align*}$$
and do a one-dimensional search for $c$ such that the resulting solution satisfies $\|x\| \le 1$ and makes $f(x)$ as small as possible. This raises the following theoretical question.
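To make the heuristic concrete, here is a minimal sketch using SciPy's box-constrained L-BFGS-B solver, with bisection as the one-dimensional search. The concave test function $f(x) = -\|x-a\|^2$ and all parameter values are hypothetical choices for illustration, not part of the question:

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([0.6, 0.8])          # hypothetical parameter; note ||a|| = 1

def f(x):                          # a concave test function
    return -np.sum((x - a) ** 2)

def grad_f(x):
    return -2.0 * (x - a)

def solve_penalized(c, d=2):
    """Minimize f(x) + c*||x||^2 over the box [-1, 1]^d."""
    obj = lambda x: f(x) + c * np.dot(x, x)
    jac = lambda x: grad_f(x) + 2.0 * c * x
    res = minimize(obj, x0=np.array([0.3, -0.2]), jac=jac,
                   method="L-BFGS-B", bounds=[(-1.0, 1.0)] * d)
    return res.x

# One-dimensional (bisection) search for c such that ||x*(c)|| = 1.
lo, hi = 0.0, 10.0
for _ in range(60):
    c = 0.5 * (lo + hi)
    norm = np.linalg.norm(solve_penalized(c))
    if norm > 1.0:
        lo = c          # solution still outside the unit ball: increase c
    else:
        hi = c          # solution inside the unit ball: decrease c
c = 0.5 * (lo + hi)
x_star = solve_penalized(c)
print(c, x_star)        # for this f, expect c near 2 and x* near -a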
My question: Is this heuristic always guaranteed to find the optimal solution?
In other words, does there always exist a constant $c \ge 0$ such that the optimal solution to
$$\begin{align*}\text{minimize }&f(x) + c \cdot \|x\|^2\\ \text{subject to }&x \in [-1,1]^d,\end{align*}$$
is also an optimal solution to
$$\begin{align*}\text{minimize }&f(x)\\ \text{subject to }&\|x\| \le 1?\end{align*}$$
Or, are there conditions on $f$ sufficient to guarantee that such a $c$ exists? In my situation, I expect that $f$ is "nice", and it would be useful to characterize conditions under which this heuristic should be expected to work.
Also, is there a reason to believe that for this value of $c$, $f(x) + c \cdot \|x\|^2$ is a convex function of $x$?
Let's call $x^*$ the solution to the heuristic for any particular value of $c$. It looks like for $c=0$ we have $\|x^*\| \ge 1$, and $\|x^*\| \to 0$ as $c \to \infty$. Assuming $\|x^*\|$ decreases monotonically and continuously as a function of $c$, there should exist a single value of $c$ such that $\|x^*\|=1$. Must this $x^*$ be an optimal solution to the original optimization problem?
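This monotonicity can at least be checked empirically (not proved) for a hypothetical concave instance such as $f(x) = -\|x-a\|^2$ with $a = (0.6, 0.8)$; the solver choice and all values here are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([0.6, 0.8])   # hypothetical parameter

def solve_penalized(c):
    """Minimize f(x) + c*||x||^2 over [-1,1]^2, with f(x) = -||x - a||^2."""
    obj = lambda x: -np.sum((x - a) ** 2) + c * np.dot(x, x)
    jac = lambda x: -2.0 * (x - a) + 2.0 * c * x
    res = minimize(obj, x0=np.array([0.3, -0.2]), jac=jac,
                   method="L-BFGS-B", bounds=[(-1.0, 1.0)] * 2)
    return res.x

cs = np.linspace(0.0, 10.0, 41)
norms = [np.linalg.norm(solve_penalized(c)) for c in cs]
print(norms[0], norms[-1])   # ||x*|| at c = 0 (above 1) and at c = 10 (well below 1)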
Not necessarily. If we're lucky ($f$ doesn't decrease too quickly outside the unit ball), such a $c$ exists; if we're unlucky ($f$ decreases extremely quickly outside the unit ball), there are no guarantees.
Since $f$ is concave, its minimum over the compact convex feasible region is attained at an extreme point, so the optimal solution $x^*$ to the original optimization problem must lie on the boundary of the unit ball, i.e., $\|x^*\|=1$. Applying the Karush-Kuhn-Tucker (KKT) conditions to the original optimization problem (i.e., minimize $f(x)$ subject to $\|x\|^2 - 1 \le 0$), we find that there exists $\mu>0$ such that $\|x^*\|=1$ and $\nabla f(x^*) = -2\mu \cdot x^*$.
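As a quick sanity check on these KKT conditions, consider the hypothetical concave instance $f(x) = -\|x-a\|^2$ with $\|a\| = 1$ (every name and value below is an assumption for illustration). The constrained optimum is the point of the ball farthest from $a$, namely $x^* = -a$, and the multiplier works out to $\mu = 2$:

```python
import numpy as np

# Hypothetical concave instance: f(x) = -||x - a||^2 with ||a|| = 1.
a = np.array([0.6, 0.8])

# Minimizing f over ||x|| <= 1 means maximizing ||x - a||^2,
# so the optimum is the point of the ball farthest from a: x* = -a.
x_star = -a

grad_f = -2.0 * (x_star - a)    # ∇f(x*) = 4a
mu = 2.0                        # ∇f(x*) = -2*mu*x*  gives  4a = 2*mu*a, i.e. mu = 2
residual = grad_f + 2.0 * mu * x_star
print(residual)                 # KKT stationarity residual: [0. 0.]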
Now set $c=\mu$ and let $g(x) = f(x) + c \cdot \|x\|^2$. Note that
$$\nabla g(x) = \nabla f(x) + 2c \cdot x = \nabla f(x) + 2\mu \cdot x.$$
By the above, we have
$$\nabla g(x^*) = 0.$$
Thus, $x^*$ is a candidate local minimum of the modified optimization problem for this value of $c$. However, even if it is a local minimum, it might or might not be a global minimum, depending on how quickly $f$ decreases outside the unit ball. If $f$ decreases extremely rapidly outside the unit ball, then for the value of $c$ chosen above, $g$ may attain its global minimum on the boundary of the box $[-1,1]^d$, and for other values of $c$ the global minimum of $g$ does not coincide with $x^*$ -- so the condition fails.
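A hypothetical instance of this failure mode is $f(x) = -K\|x\|^2$ with $K = 3$ (here every point of the unit sphere is optimal, and the multiplier is $\mu = K$): for $c$ slightly below $K$ the penalized objective is concave, so its box minimum sits at a corner with norm $\sqrt{d} > 1$, while for $c$ slightly above $K$ it is convex with minimum at the origin. The norm of the solver's reported solution jumps past $1$ without ever landing on the sphere (all values below are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

K = 3.0   # hypothetical rate at which f decreases away from the origin

def solve_penalized(c, d=2):
    """Minimize -K*||x||^2 + c*||x||^2 over the box [-1, 1]^d."""
    obj = lambda x: (c - K) * np.dot(x, x)
    jac = lambda x: 2.0 * (c - K) * x
    res = minimize(obj, x0=np.full(d, 0.3), jac=jac,
                   method="L-BFGS-B", bounds=[(-1.0, 1.0)] * d)
    return res.x

below = np.linalg.norm(solve_penalized(K - 0.1))  # concave: a corner of the box
above = np.linalg.norm(solve_penalized(K + 0.1))  # convex: the origin
print(below, above)   # roughly sqrt(2) and 0 -- the norm never passes through 1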