Consider a function $f(x,y)$ that is convex in $x$ and concave in $y$. We are interested in the following optimization problem: \begin{align} \min_{x \in D_x} \max_{y \in D_y} f(x,y) \end{align} Under suitable conditions (e.g., $D_x$ and $D_y$ convex with at least one of them compact, by Sion's minimax theorem), convexity in $x$ and concavity in $y$ give \begin{align} \max_{y \in D_y} \min_{x \in D_x} f(x,y) = \min_{x \in D_x} \max_{y \in D_y} f(x,y) \end{align} Question: Under which conditions can we fix $y=y^k$ and minimize over $x$, then fix $x=x^{k}$ and maximize over $y$, so that this alternating scheme converges to a global saddle point of the original problem?
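To see why extra conditions are needed, here is a minimal sketch (the bilinear example $f(x,y)=xy$ on $[-1,1]^2$ is my own illustration, not part of the original question): alternating *exact* best responses can cycle around the saddle point instead of converging to it.

```python
import numpy as np

# f(x, y) = x * y on [-1, 1] x [-1, 1]; the unique saddle point is (0, 0).
def best_response_x(y):
    # argmin_{x in [-1, 1]} x * y  is  -sign(y)  (any x works if y == 0)
    return -np.sign(y) if y != 0 else 0.0

def best_response_y(x):
    # argmax_{y in [-1, 1]} x * y  is  sign(x)  (any y works if x == 0)
    return np.sign(x) if x != 0 else 0.0

y = 1.0
traj = []
for _ in range(6):
    x = best_response_x(y)   # exact minimization in x for fixed y
    y = best_response_y(x)   # exact maximization in y for fixed x
    traj.append((x, y))

print(traj)  # cycles between (-1.0, -1.0) and (1.0, 1.0), never reaching (0, 0)
```

So even in the convex-concave case, plain alternating exact minimization/maximization can oscillate; known convergence results typically require extra structure (e.g., strong convexity-concavity) or modified updates (averaging, extragradient, proximal terms).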
If such conditions are not known, how can we solve this problem using, say, an Augmented Lagrangian method in $x$ and some other method in $y$?
EDIT: I searched the internet extensively but did not find anything directly relevant to my setting. In my case, $f(x,y)$ has the form
\begin{align}
f(x, y=(u,Q)) = x^T u - \rho\, x^T Q x
\end{align}
Here $y=(u,Q)$ is the second argument of the function, and $f$ is linear in $y$.
$D_x$ is the box $\{x : l \le x \le u\}$ and $D_y$ is a more complicated set defined by semidefinite constraints. Projecting onto $D_x$ is very simple; $D_y$ is more complex, but projecting onto it is still not too difficult.
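Since both projections are assumed cheap, one standard candidate is projected gradient descent-ascent with ergodic (running-average) iterates. The sketch below is illustrative only: the dimension $n$, the sign of $\rho$ (chosen so that $-\rho\, x^T Q x$ is convex in $x$), the box bounds, and the stand-in projections for $D_y$ (`proj_psd_ball`, `proj_ball`) are all my assumptions, not the actual sets from the question.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
rho = -1.0             # assumed: rho < 0 and Q PSD make f convex in x
l, u_box = -1.0, 1.0   # illustrative box bounds for D_x

def proj_box(x):
    # Euclidean projection onto the box [l, u_box]^n
    return np.clip(x, l, u_box)

def proj_psd_ball(Q, radius=1.0):
    # Stand-in for the real semidefinite-type projection in D_y:
    # symmetrize, then clip eigenvalues to [0, radius].
    Q = (Q + Q.T) / 2
    w, V = np.linalg.eigh(Q)
    return V @ np.diag(np.clip(w, 0.0, radius)) @ V.T

def proj_ball(v, radius=1.0):
    # Stand-in projection of u onto a Euclidean ball.
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

x = np.zeros(n)
u_vec = rng.standard_normal(n)
Q = np.eye(n)
eta = 0.05
x_avg = np.zeros(n)

for k in range(1, 2001):
    gx = u_vec - 2 * rho * (Q @ x)       # grad_x f = u - 2 rho Q x (Q symmetric)
    gu = x                               # grad_u f = x
    gQ = -rho * np.outer(x, x)           # grad_Q f = -rho x x^T
    x = proj_box(x - eta * gx)           # projected descent step in x
    u_vec = proj_ball(u_vec + eta * gu)  # projected ascent step in u
    Q = proj_psd_ball(Q + eta * gQ)      # projected ascent step in Q
    x_avg += (x - x_avg) / k             # ergodic average of the x iterates

print(x_avg)
```

For convex-concave problems the averaged iterates of such primal-dual schemes have known convergence guarantees, whereas the last iterates alone may oscillate; extragradient or proximal-point variants are common alternatives when plain descent-ascent is too slow.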