When could a small perturbation in the objective function leave the solution set unchanged?


Let $R: \mathbb R^p\to\mathbb R^+$ be an (almost everywhere) twice differentiable function we want to minimize. For $\lambda\ge0$, define the $\lambda$-regularized objective as $$ \min_{\theta\in \mathbb R^p}\ \underbrace{R(\theta) + \frac\lambda2|\theta|^2 }_{=:\ R_\lambda(\theta)}, \tag1$$ where $|\cdot|$ is the standard Euclidean norm. Since $R$ is not convex, the argmin of $R_\lambda$ need not be a singleton.

I have the following question:

does there exist an $\varepsilon > 0$ such that

$$\arg\min_{\theta\in\mathbb R^p} R_\lambda(\theta) =\arg\min_{\theta\in\mathbb R^p} R(\theta) \ \text{ for all } \lambda\in[0,\varepsilon] \ ?$$

(Again, note that these argmins need not be singletons.) In words: does adding a sufficiently small perturbation leave the solution set of the optimization problem unchanged?

I expect this to be possible: by a Lagrangian argument, one can show that problem $(1)$ is equivalent to minimizing $R(\theta)$ under the constraint that $\theta$ lie in a ball whose radius is a non-increasing function of $\lambda$, so for vanishingly small values of $\lambda$ the constraint should not be active (see e.g. here, here or here).
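For completeness, the equivalence invoked above can be made concrete with a short, standard argument (sketched here in the notation of $(1)$). If $\theta_\lambda \in \arg\min R_\lambda$ and $|\theta| \le |\theta_\lambda|$, then

$$ R(\theta) \;=\; R_\lambda(\theta) - \frac\lambda2|\theta|^2 \;\ge\; R_\lambda(\theta_\lambda) - \frac\lambda2|\theta|^2 \;=\; R(\theta_\lambda) + \frac\lambda2\left(|\theta_\lambda|^2 - |\theta|^2\right) \;\ge\; R(\theta_\lambda), $$

so $\theta_\lambda$ minimizes $R$ over the ball $\{\theta : |\theta| \le |\theta_\lambda|\}$, which matches the non-increasing-radius picture above.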

It has, however, been shown in an answer to this previous question of mine that such an $\varepsilon$ need not exist in general. Hence I am looking for a sufficiently general condition under which the answer to my question is positive.
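To see concretely how a uniform $\varepsilon$ can fail, here is a minimal one-dimensional illustration (my own toy example, not the counterexample from the linked answer). Take $R(t) = (t^2-1)^2$, whose argmin is $\{-1, +1\}$; solving $R_\lambda'(t) = t\,(4t^2 - 4 + \lambda) = 0$ shows the minimizers of $R_\lambda$ sit at $\pm\sqrt{1 - \lambda/4}$, so the argmin moves for *every* $\lambda > 0$, however small:

```python
import numpy as np

# Toy 1-D example (illustrative only): R(t) = (t^2 - 1)^2 has argmin {-1, +1}.
# Adding (lambda/2) t^2 moves the minimizers to +/- sqrt(1 - lambda/4),
# so argmin(R_lambda) != argmin(R) for every lambda > 0.

def R(t):
    return (t**2 - 1.0)**2

def R_lam(t, lam):
    return R(t) + 0.5 * lam * t**2

# Fine grid around the minimizers (spacing 2e-6).
grid = np.linspace(-2.0, 2.0, 2_000_001)

for lam in [0.0, 1e-2, 1e-4]:
    vals = R_lam(grid, lam)
    t_star = grid[np.argmin(vals)]         # one of the two symmetric minimizers
    predicted = np.sqrt(1.0 - lam / 4.0)   # closed-form |t*| from the derivative
    print(f"lambda={lam:.0e}  |argmin|={abs(t_star):.7f}  predicted={predicted:.7f}")
```

The printed locations agree with the closed form and are strictly inside $(-1, 1)$ whenever $\lambda > 0$, confirming that no $\varepsilon > 0$ works for this $R$.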

Thanks in advance for your help.