Consider the optimization problem $$ \mathcal{P}_1: \qquad \min_{x \in \mathbb{R}^n} c^\top x \quad \text{sub. to } \ g(x,y_i) \leq 0 \ \ \forall i = 1,2,...,M$$
where $c \in \mathbb{R}^n$, and $g:\mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$ (scalar valued) is such that for all $y \in \mathbb{R}^m$ the map $g(\cdot,y)$ is convex, and for all $x \in \mathbb{R}^n$ the map $g(x,\cdot)$ is convex as well.
Denote by $x^*$ an optimizer of $\mathcal{P}_1$.
Given the vectors $\{y_1,...,y_M\}$ from $\mathcal{P}_1$, define the set $$\mathcal{F} := \{ (F,f) \in \mathbb{R}^{1 \times m} \times \mathbb{R} \mid F y_i \leq f \ \ \forall i = 1,...,M \} = \{ (F,f) \in \mathbb{R}^{1 \times m} \times \mathbb{R} \mid \max_{i = 1,\dots,M} F y_i \leq f \}.$$
For given $(\bar{F},\bar{f}) \in \mathcal{F}$, consider the optimization problem $$ \mathcal{P}_2(\bar{F},\bar{f}): \qquad \min_{x\in \mathbb{R}^n } c^\top x \quad \text{sub. to } \ g(x,y) \leq 0 \ \ \forall y \in \{ z \in \mathbb{R}^m \mid \bar{F} z\leq \bar{f} \}.$$
Denote by $X_{(\bar{F},\bar{f})}^*$ the set of optimizers of $\mathcal{P}_2(\bar{F},\bar{f})$.
The question: given an optimizer $x^*$, does there always exist a pair $({F}^*,{f}^*) \in \mathcal{F}$ such that $x^* \in X_{({F}^*,{f}^*)}^*$?
Comment. The idea is to exploit the convexity of $g(x,\cdot)$ to argue that we can always consider a worst-case half-plane $\{y \in \mathbb{R}^m \mid F^* y \leq f^*\}$.
The linear case is here.
I do not believe that the problems are equivalent. What is true is that $$g(x,y_i)\leq 0, ~i=1,2,\dots,M \quad\Longrightarrow\quad g(x,y)\leq 0 ~ \forall y\in\mathcal{Y},$$ where $\mathcal{Y}$ is the convex hull of the points $y_i$. Indeed, writing $y = \sum_i \lambda_i y_i$ with $\lambda_i \geq 0$ and $\sum_i \lambda_i = 1$, Jensen's inequality gives $g(x,y) \leq \sum_i \lambda_i g(x,y_i) \leq 0$. This uses only the convexity of $g(x,\cdot)$, and does not require joint convexity. Thus if you replaced your half-plane with $\mathcal{Y}$, you would have equivalence.
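This implication is easy to check numerically. Below is a minimal sketch with a toy constraint function of my own choosing, $g(x,y) = (x^\top y)^2 - 1$ (convex in $y$ for fixed $x$, since it is the square of an affine map); the scenario points $y_i$ and the point $x$ are arbitrary:

```python
import numpy as np

# Hypothetical convex-in-y constraint function (my example, not from the post):
# g(x, y) = (x @ y)**2 - 1.
def g(x, y):
    return (x @ y) ** 2 - 1.0

rng = np.random.default_rng(0)
x = np.array([0.3, -0.2])

# Scenario points y_i, chosen so that g(x, y_i) <= 0 for all i.
Y = np.array([[1.0, 0.5], [-0.5, 1.0], [0.0, -1.0], [2.0, 2.0]])
assert all(g(x, yi) <= 0 for yi in Y)

# Jensen: g(x, sum_i lam_i y_i) <= sum_i lam_i g(x, y_i) <= 0, so the
# constraint holds on the entire convex hull of the y_i.
for _ in range(1000):
    lam = rng.dirichlet(np.ones(len(Y)))  # random convex weights
    y = lam @ Y
    assert g(x, y) <= 0
```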
But any one of the half-planes described by $\mathcal{F}$ contains $\mathcal{Y}$ and is strictly larger than $\mathcal{Y}$. Thus there can be points $y$ in the half-plane with $g(x,y)>0$ even though $x$ satisfies all $M$ constraints of the first model. The second model therefore has a more restrictive constraint set, and can yield a higher objective value.
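The gap can be exhibited concretely. The sketch below (a toy instance of my own, with $g(x,y)=(x^\top y)^2-1$) builds a half-plane containing all the scenario points that nevertheless contains a point violating the constraint:

```python
import numpy as np

# Toy instance (my own choice, not from the post): g(x, y) = (x @ y)**2 - 1,
# convex in y for fixed x.
def g(x, y):
    return (x @ y) ** 2 - 1.0

x = np.array([0.3, -0.2])
Y = np.array([[1.0, 0.5], [-0.5, 1.0], [0.0, -1.0], [2.0, 2.0]])  # scenarios
assert all(g(x, yi) <= 0 for yi in Y)   # x is feasible for P1

# A half-plane {y : F y <= f} containing every scenario point...
F, f = np.array([1.0, 0.0]), 2.0
assert all(F @ yi <= f for yi in Y)

# ...still contains points where the constraint fails, so x need not be
# feasible for P2(F, f).
y_bad = np.array([2.0, 10.0])
assert F @ y_bad <= f
assert g(x, y_bad) > 0
```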
EDIT: Without a fuller understanding of the purpose behind your efforts, it's hard to know what the right approach is. I am assuming that there is a reason why you do not want to solve the problem in the original stated form: that is, with $M$ constraints, one for each point $y_i$.
But suppose you can efficiently describe the convex hull $\mathcal{Y}$ in terms of a set of inequalities $Fy\leq f$, where $F$ is now a matrix and $f$ a vector. Then you could rewrite your constraints: $$g(x,y) \leq 0 \quad \forall y \text{ s.t. } F y \leq f\quad\Longleftrightarrow\quad \sup_{y:~Fy\leq f} g(x,y) \leq 0.$$ For fixed $x$, the inner $\sup$ is a maximization in $y$; since $g(x,\cdot)$ is convex, it is attained at a vertex of the polytope. One thing you can consider is finding the dual of this subproblem. This dual will be a minimization in some variables $z$. The precise result depends on the exact nature of $g$, but it is quite possible that you will be able to rewrite your original problem as a joint minimization in $x$ and $z$.
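As a sanity check of the duality route in the linear case (a toy instance of my own: $g(x,y)=x^\top y - b$ with a box uncertainty set $\{y : -1 \le y_j \le 1\}$, i.e. $F = [I;-I]$, $f = \mathbf{1}$), the inner $\sup$ is an LP, and LP duality gives the closed form $\sup_{Fy\leq f} x^\top y = \|x\|_1$, so the robust constraint collapses to $\|x\|_1 \leq b$:

```python
import itertools
import numpy as np

# Toy linear case (my own example, not from the post): g(x, y) = x @ y - b.
x = np.array([2.0, -0.5, 1.0])
b = 4.0

# Primal side: g is linear in y, so the sup over the box is attained at a
# vertex; enumerate the 2^m vertices.
sup_g = max(x @ np.array(v)
            for v in itertools.product([-1.0, 1.0], repeat=3)) - b

# Dual side: min{f @ z : F.T @ z = x, z >= 0}, which for the box evaluates
# in closed form to ||x||_1.
dual_val = np.sum(np.abs(x)) - b

assert np.isclose(sup_g, dual_val)
# The robust constraint "g(x,y) <= 0 for all y in the box" is ||x||_1 <= b.
assert (sup_g <= 0) == (np.sum(np.abs(x)) <= b)
```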