In optimization theory, semi-infinite programming (SIP) is an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In the former case the constraints are typically parameterized:
$$ \min_{x \in X}\; f(x) $$
$$ \text{subject to:}\quad g(x,y) \le 0 \;\; \forall y \in Y, $$
where
$$ \begin{split} f&:\mathbb R^n \to\mathbb R\\ g&:\mathbb R^n \times\mathbb R^m \to\mathbb R\\ X& \subseteq\mathbb R^n\\ Y& \subseteq\mathbb R^m. \end{split} $$
This is the description you will find on Wikipedia. You will also see that there are hardly any reports of recent advances in the field of semi-infinite programming, which is very discouraging.
I have been bothered by a non-convex semi-infinite programming problem for a long time. The problem is almost convex, in the sense that $f$ and $g$ are both convex. More specifically, $f$ is quadratic in $x$, and $g$ is quadratic in both $x$ and $y$. However, the domain $X$ is defined by a norm equality constraint:
$$ X:=\{x\in\mathbb R^n \mid \|x\|=1\}\quad(\text{say, the 2-norm}), $$
which is not convex. I am certainly not an expert in optimization, nor do I have time to go through many optimization books (which I actually did) to find a satisfying answer. So I would say it may or may not be a problem about semi-infinite programming.
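To make the problem class concrete, here is a small toy instance of the kind of problem I mean, with completely made-up matrices ($P$, $Q$, $R$, $S$ below are invented for illustration only), and a crude feasibility check of one unit-norm candidate on a finite sample of $Y$:

```python
import numpy as np

# Toy instance of the problem class described above; all data is invented.
# f(x) = x^T P x  (quadratic in x),
# g(x, y) = x^T Q x + x^T R y + y^T S y - 2  (quadratic in x and y),
# X = unit sphere in R^3, Y sampled here as a grid on [-1, 1]^2.
P = np.diag([1.0, 2.0, 3.0])
Q = np.eye(3)
R = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
S = np.eye(2)

def f(x):
    return x @ P @ x

def g(x, y):
    return x @ Q @ x + x @ R @ y + y @ S @ y - 2.0

# Crude sanity check: is a given unit-norm candidate x feasible
# on a finite sample of Y?
grid = np.linspace(-1.0, 1.0, 21)
x = np.array([1.0, 0.0, 0.0])            # lies on the unit sphere X
worst = max(g(x, np.array([a, b])) for a in grid for b in grid)
print("worst sampled violation:", worst)
```

Of course, sampling $Y$ only gives a necessary feasibility check, not a sufficient one; that is exactly the semi-infinite difficulty.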
I would hope that someone could shed some light on this, point me in the right direction, to recent advances in semi-infinite programming, or to anything that may help me solve my problem.
Actually, I believe the equality constraint on the upper-level variable $x$ is not a big problem.
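One way to see why the norm-equality constraint by itself is benign: minimizing a pure quadratic form $x^\top P x$ over $\|x\|=1$ is an eigenvalue (Rayleigh quotient) problem and can be solved globally despite the nonconvex feasible set; with an added linear term it becomes the classical trust-region subproblem, which is also tractable. A minimal sketch with a made-up matrix $P$:

```python
import numpy as np

# Minimizing x^T P x over the unit sphere ||x|| = 1 is an eigenvalue
# problem: the global minimizer is an eigenvector of the smallest
# eigenvalue of P. (P below is made up for illustration.)
P = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
w, V = np.linalg.eigh(P)        # eigenvalues in ascending order
x_star = V[:, 0]                # unit eigenvector of the smallest eigenvalue
print(w[0], x_star @ P @ x_star)
```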
More important is your assumption that quadratic functions are always convex: the functions $h_1(x)=x^2$ and $h_2(x)=-x^2$ are both quadratic, but only the first is convex.
A very useful property your problem could have is that the "lower level" problem $$ \min_{y\in Y} -g(x,y) $$ is a convex optimization problem.
In that case, adaptive discretization methods can easily be applied. Alternatively, in that case your problem reduces to a finite nonconvex optimization problem, because you can replace the semi-infinite constraint with the constraints (assuming $Y=\mathbb{R}^m$): $$g(x,y)\le 0$$ $$\frac{\partial g}{\partial y}(x,y)=0$$
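The adaptive discretization (exchange) idea can be sketched as follows: solve the outer problem over a finite subset of $Y$, find the most violated $y$ by maximizing $g(x,\cdot)$, add it to the subset, and repeat. The instance below is entirely made up ($g$ is chosen concave in $y$ so the inner maximizer has the closed form $y^\star = Bx/2$); it is an illustration of the scheme, not your actual problem:

```python
import numpy as np
from scipy.optimize import minimize

# Exchange (adaptive discretization) sketch for
#   min_x f(x)  s.t.  g(x, y) <= 0 for all y,
# with invented data. g(x, y) = (B x).y - ||y||^2 - 1 is concave in y,
# so the inner maximization has the closed form y* = B x / 2.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0]])

def f(x):                       # convex quadratic objective (made up)
    return x @ x - 6.0 * x[0]

def g(x, y):                    # quadratic in x and y, concave in y
    return (B @ x) @ y - y @ y - 1.0

def worst_y(x):                 # maximizer of g(x, .) over Y = R^2
    return B @ x / 2.0

Yk = [np.zeros(2)]              # start from a single discretization point
x = np.zeros(3)
for _ in range(50):
    # Outer problem: minimize f subject to g(x, y_i) <= 0 for the
    # finitely many y_i collected so far.
    cons = [{"type": "ineq", "fun": (lambda x, y=y: -g(x, y))} for y in Yk]
    x = minimize(f, x, constraints=cons, method="SLSQP").x
    y_star = worst_y(x)
    if g(x, y_star) <= 1e-6:    # no (significantly) violated y left: done
        break
    Yk.append(y_star)           # exchange step: add the most violated y

print("x =", x, "violation =", g(x, worst_y(x)))
```

On this toy instance the semi-infinite constraint is equivalent to $\|Bx\|^2/4 \le 1$, and the iteration converges quickly because each added $y$ cuts off the current infeasible point.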