While reading [1], I came across a comment about the numerical stability of 'min' and 'argmin' operations in optimization that I don't understand. For background, the paper gives a variational characterization of two functions $\overline{Q}_X$ and $Q_X$ (I don't think their details are important here):
$$\overline{Q}_X(p) = \min_x f(x, p), \qquad Q_X(p) = \operatorname*{argmin}_x f(x, p).$$
These formulas are useful for incorporating $\overline{Q}_X$ and $Q_X$ into optimization problems, since minimization over $p$ can be carried out through joint minimization with an auxiliary variable $x$. However, the author makes a comment, which I paraphrase: "As the formulas above underscore for anyone familiar with the relative behavior of 'min' and 'argmin' in numerical optimization, $Q_X$ is inherently less stable than $\overline{Q}_X$."
Can anyone elaborate on this comment? Evidently I am not familiar with the relative behavior of 'min' and 'argmin'. Is he saying that 'min' operations are more stable than 'argmin' operations? Why? I can see that using 'min' rather than 'argmin' is easier to model, but I'm not sure I see the relevance of stability.
[1] Rockafellar, R. Tyrrell, and Johannes O. Royset. "Random variables, monotone relations, and convex analysis." Mathematical Programming 148.1-2 (2014): 297-331.
As an extremely simple example, consider
$$ \operatorname*{argmin}_x \big((x-1)^2 + a\big)(x+1)^2 $$
as $a$ varies from a small positive number to a small negative number.
You can plot this in Desmos and vary $a$ to see what happens; the two cases to compare are $a = +0.05$ and $a = -0.05$.
You can see how the min moves continuously from zero to small negative values, but the argmin jumps from $x = -1$ to $x = +1$.
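Here is a quick numerical check of that jump (a minimal sketch using NumPy and a brute-force grid search; the grid bounds and resolution are my choices, not anything from the example itself):

```python
import numpy as np

def f(x, a):
    # The objective from the example: ((x - 1)^2 + a) * (x + 1)^2
    return ((x - 1)**2 + a) * (x + 1)**2

# Brute-force minimization over a fine grid of x values.
x = np.linspace(-2.0, 2.0, 400001)

for a in (+0.05, -0.05):
    vals = f(x, a)
    i = np.argmin(vals)
    print(f"a = {a:+.2f}:  min = {vals[i]:.4f}   argmin x = {x[i]:.4f}")
```

As $a$ crosses zero, the min changes only slightly (from $0$ to about $-0.2$), while the argmin jumps from $x = -1$ all the way to $x \approx +1.02$.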
BTW, this is just an example, but you can show that if the quantity you are minimizing is a continuous function of its parameters (and the minimization is over a compact set, so the minimum is attained), then its minimum is also a continuous function of those parameters. This example shows that you will not be able to prove the same thing for the argmin.
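For completeness, here is the standard one-line argument for the min; I am assuming the minimization is over a fixed compact set $K$, which is what makes the minimum attained. For any two parameter values $p$ and $p'$,
$$\left|\,\min_{x \in K} f(x,p) - \min_{x \in K} f(x,p')\,\right| \;\le\; \max_{x \in K} \left|\,f(x,p) - f(x,p')\,\right|,$$
and the right-hand side tends to $0$ as $p' \to p$ by uniform continuity of $f$ on $K \times B$ for a compact neighborhood $B$ of $p$. There is no analogous bound relating $\operatorname*{argmin}$ values to $\max_x |f(x,p) - f(x,p')|$, and the example above shows why there cannot be one.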