I want to solve a robust optimization problem (worst-case optimization) of the form
$$ \min_{x} \max_q f(x,q) \tag{1} $$
with $x \in \mathbb{R}^n$ and $q \in \mathbb{R}^m$, where $q_i \in [\underline{q}_i, \overline{q}_i]$ and $\underline{q}_i < \overline{q}_i$ for all $i = 1, \dots, m$. The variables $x$ are the decision variables, and the $q$ are uncertain parameters (box uncertainty).
Assume now we can analytically solve the following problem
$$ M(q) = \min_x f(x, q) \tag{2} $$
i.e., derive the minimum $M(q)$ of $f$ over $x$ as a function of the parameters $q$. Further assume that I can then solve the following optimization problem
$$ \max_{q} M(q). \tag{3} $$
Question: Is the solution to $(3)$, computed using the solution of $(2)$, also a solution of $(1)$? That is, can I solve a problem like $(1)$ by first finding a parameter-dependent minimizer over $x$ and then a parameter combination $q$ that maximizes this minimum?
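To make the procedure concrete, here is a toy instance (my own illustrative choice, not the actual problem I am solving): take $f(x,q) = x^2 - 2qx$ with $x \in \mathbb{R}$ and $q \in [-1,1]$. Then
$$ \frac{\partial f}{\partial x} = 2x - 2q = 0 \;\Rightarrow\; x^*(q) = q, \qquad M(q) = f(x^*(q), q) = -q^2, \qquad \max_{q \in [-1,1]} M(q) = 0 \ \text{at } q = 0. $$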
According to the minimax theorem (e.g. Sion's minimax theorem), if $f$ is a continuous function that is concave in $q$ and convex in $x$ (roughly speaking; your box for $q$ being compact and convex helps here), then
$$ \min_x\max_q f(x,q) = \max_q\min_x f(x,q) = \max_q M(q) $$
holds, so in this case your method solves the problem.

In general, however, only the weak inequality
$$ \min_x\max_q f(x,q) \ge \max_q\min_x f(x,q) $$
is guaranteed, and it can be strict. For instance, for $f(x,q) = (x-q)^2$ with $x \in \mathbb{R}$ and $q \in [-1,1]$, we have $\max_q \min_x f = 0$ (choose $x = q$), but $\min_x \max_q f = 1$ (attained at $x = 0$). In such a case, your method does not give the solution to $(1)$.
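As a quick numerical sanity check, here is a minimal grid-search sketch (the two functions are my own illustrative choices, not from the question) comparing the two orderings: one convex-concave $f$, where both orders agree, and the counterexample above, where the inequality is strict.

```python
import numpy as np

# Grids over the decision variable x and the uncertain parameter q.
# A coarse grid is enough to illustrate the point; this is a sketch,
# not a solver.
x = np.linspace(-2.0, 2.0, 401)
q = np.linspace(-1.0, 1.0, 201)
X, Q = np.meshgrid(x, q, indexing="ij")  # X[i, j] = x[i], Q[i, j] = q[j]

def orders(F):
    """Return (min_x max_q F, max_q min_x F) on the grid."""
    minimax = F.max(axis=1).min()  # worst q for each x, then best x
    maximin = F.min(axis=0).max()  # best x for each q, then worst q
    return minimax, maximin

# Convex in x, linear (hence concave) in q: the minimax theorem applies,
# so both orders agree (up to grid resolution).
print(orders(X**2 - 2 * Q * X))  # ~ (0.0, 0.0)

# Convex in x AND convex in q: only min_x max_q >= max_q min_x is
# guaranteed, and here the gap is strict.
print(orders((X - Q) ** 2))      # ~ (1.0, 0.0)
```

The second print shows exactly the failure mode: swapping the order of $\min$ and $\max$ loses the worst-case guarantee, so $\max_q M(q)$ underestimates the true value of $(1)$.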