Let $f :\mathbb{R} \to \mathbb{R}$ be a closed convex function and $\mathbf{a}, \mathbf{b}, \mathbf{c} \in \mathbb{R}^n$ be given parameters. I have a problem that can be roughly represented as
$$ \sup_{\theta \in \Theta \subset \mathbb{R}} - f(\theta) + \inf_{\mathbf{x},\mathbf{y}: \lVert \mathbf{x}\rVert \leq 1, \lVert \mathbf{y}\rVert \leq 1} \{ \mathbf{x}^\top \mathbf{a} + \mathbf{y}^\top \mathbf{b} \ : \ \mathbf{x}+ \mathbf{y} = \theta \cdot \mathbf{c} \}.$$
I would like to understand when I can change the order of $\inf$ and $\sup$, to obtain something like $$\inf_{\mathbf{x},\mathbf{y}: \lVert \mathbf{x}\rVert \leq 1, \lVert \mathbf{y}\rVert \leq 1} \mathbf{x}^\top \mathbf{a} + {\mathbf{y}}^\top \mathbf{b} + \sup_{\theta \in \Theta \subset \mathbb{R}}\{-f(\theta) \ : \ \mathbf{x} + \mathbf{y} = \theta \cdot \mathbf{c}\}.$$
I understand that the minimax theorem applies when the constraints are separable and the objective is convex-concave, but here the situation is reversed: the objective is separable, and the variables are coupled through a single linear constraint.
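To get some intuition, here is a brute-force check on a toy one-dimensional instance (the choices $f(\theta)=\theta^2$, $\Theta=[-1,1]$, $a=1$, $b=-1$, $c=1$, with the absolute value as the norm, are all mine, purely for illustration); in this instance the two orders give different values:

```python
import numpy as np

# Toy 1-D instance, all choices mine: f(t) = t^2, Theta = [-1, 1],
# a = 1, b = -1, c = 1, and |.| as the norm on R^1.
a, b, c = 1.0, -1.0, 1.0
f = lambda t: t ** 2
thetas = np.linspace(-1.0, 1.0, 2001)

def inner_inf(theta):
    """inf_{|x|<=1, |y|<=1, x+y=theta*c} a*x + b*y, eliminating y = theta*c - x."""
    x = np.linspace(-1.0, 1.0, 2001)
    y = theta * c - x
    vals = a * x + b * y
    vals[np.abs(y) > 1.0] = np.inf  # enforce |y| <= 1
    return vals.min()

# Original order: sup_theta { -f(theta) + inner inf }.
v_supinf = max(-f(t) + inner_inf(t) for t in thetas)

# Swapped order: for c != 0 the constraint pins down theta = (x+y)/c, so
# the inner sup is over a single feasible theta; minimise over feasible (x, y).
v_infsup = min(
    a * x + b * (t * c - x) - f(t)
    for t in thetas[::10]
    for x in np.linspace(max(-1.0, t * c - 1.0), min(1.0, t * c + 1.0), 201)
)

print(v_supinf, v_infsup)  # about -1.75 vs -2.0 here, so the swap is not free
```

So in general only an inequality between the two orders can be expected, and extra conditions are needed for equality.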
Attempt
I have tried to move the equality constraint into the objective via Lagrangian duality (assuming strong duality holds for the inner problem, which has a linear objective, a compact feasible set, and an affine constraint), obtaining $$ \sup_{\theta \in \Theta \subset \mathbb{R}} - f(\theta) + \sup_{\boldsymbol{\lambda}\in \mathbb{R}^n} \; \inf_{\mathbf{x},\mathbf{y}: \lVert \mathbf{x}\rVert \leq 1, \lVert \mathbf{y}\rVert \leq 1} \{ \mathbf{x}^\top \mathbf{a} + \mathbf{y}^\top \mathbf{b} + \boldsymbol{\lambda}^\top(\mathbf{x} + \mathbf{y} - \theta \cdot \mathbf{c})\}.$$ Since the two suprema commute, I can write, without loss of generality, $$ \sup_{\boldsymbol{\lambda}\in \mathbb{R}^n} \; \sup_{\theta \in \Theta \subset \mathbb{R}} - f(\theta) + \inf_{\mathbf{x},\mathbf{y}: \lVert \mathbf{x}\rVert \leq 1, \lVert \mathbf{y}\rVert \leq 1} \{ \mathbf{x}^\top \mathbf{a} + \mathbf{y}^\top \mathbf{b} + \boldsymbol{\lambda}^\top(\mathbf{x} + \mathbf{y} - \theta \cdot \mathbf{c})\}.$$
Now I want to say something like "if $\mathbf{x} + \mathbf{y} \neq \theta \cdot \mathbf{c}$ then the term $\boldsymbol{\lambda}^\top(\mathbf{x} + \mathbf{y} - \theta \cdot \mathbf{c})$ can be driven to $-\infty$" so as to eliminate the Lagrange multiplier, but this argument fails: it would require $\boldsymbol{\lambda}$ to be chosen after $\mathbf{x}$, $\mathbf{y}$ and $\theta$ are fixed, whereas here $\mathbf{x}$ and $\mathbf{y}$ are chosen after seeing $\boldsymbol{\lambda}$.
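For what it is worth, the inner infimum in the last display can be evaluated in closed form: since $\inf_{\lVert \mathbf{x}\rVert \leq 1} \mathbf{x}^\top \mathbf{v} = -\lVert \mathbf{v}\rVert_*$, where $\lVert\cdot\rVert_*$ denotes the dual norm, a routine rewriting (a sketch, assuming I have not slipped a sign) gives
$$ \sup_{\boldsymbol{\lambda}\in \mathbb{R}^n} \; \sup_{\theta \in \Theta} \{ -f(\theta) - \theta\, \boldsymbol{\lambda}^\top \mathbf{c} \} - \lVert \mathbf{a} + \boldsymbol{\lambda}\rVert_* - \lVert \mathbf{b} + \boldsymbol{\lambda}\rVert_* = \sup_{\boldsymbol{\lambda}\in \mathbb{R}^n} (f + \iota_\Theta)^*(-\mathbf{c}^\top \boldsymbol{\lambda}) - \lVert \mathbf{a} + \boldsymbol{\lambda}\rVert_* - \lVert \mathbf{b} + \boldsymbol{\lambda}\rVert_*, $$
where $\iota_\Theta$ is the indicator function of $\Theta$ and $(\cdot)^*$ the convex conjugate. This removes $\mathbf{x}$, $\mathbf{y}$ and $\theta$ entirely, but of course not $\boldsymbol{\lambda}$.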
Omitted detail
Note that, in an optimization problem, I have a constraint of the form $$ \sup_{\theta \in \Theta \subset \mathbb{R}} - f(\theta) + \inf_{\mathbf{x},\mathbf{y}: \lVert \mathbf{x}\rVert \leq 1, \lVert \mathbf{y}\rVert \leq 1} \{ \mathbf{x}^\top \mathbf{a} + \mathbf{y}^\top \mathbf{b} \ : \ \mathbf{x}+ \mathbf{y} = \theta \cdot \mathbf{c} \} \leq 0 $$ where $\mathbf{c}$ is actually an optimization variable. So I will need to add some additional constraints on $\mathbf{c}$ so that eventually we have $\mathbf{x}+ \mathbf{y} = \theta \cdot \mathbf{c}$, but for that I need to characterise the optimal solutions of the above form.