Consider the points $x_1, \ldots, x_N \in \mathbb{R}^n$, and a (locally bounded, convex) function $f: \mathbb{R}^n \rightarrow \mathbb{R}$.
I am looking for the dual of the following optimization problem. $$ \max_{x \in \text{conv}( \{ x_1, \ldots, x_N \} ) } f(x) $$
Comments. I know how to compute the dual in the standard setting, but it is not clear to me how to derive it in the presence of the convex hull constraint. Note that I do not want to compute explicitly the hyperplanes defining the convex hull of the given $x_i$'s.
As you probably know, this is not a convex optimization problem, because you are maximizing a convex function, not minimizing it. As Siamak correctly states, its optimal points, if they exist, lie on the boundary of the convex hull.
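In fact, for convex $f$ the maximum over the hull is attained at one of the $x_i$ themselves (the extreme points of $\text{conv}\{x_1,\dots,x_N\}$ are a subset of the $x_i$), so the primal can in principle be solved by enumeration. A quick numerical sanity check of this, using an assumed toy example $f(x)=\|x\|^2$ and a few arbitrary points:

```python
import random

# For convex f, no point of the hull beats the best vertex. We check this
# numerically for the assumed example f(x) = ||x||^2 in R^2.
def f(x):
    return sum(t * t for t in x)

pts = [(1.0, 0.0), (-2.0, 1.0), (0.5, 3.0), (-1.0, -1.0)]
vertex_max = max(f(p) for p in pts)      # brute-force primal over the vertices

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in pts]
    s = sum(w)
    w = [t / s for t in w]               # random convex weights
    x = tuple(sum(wi * p[k] for wi, p in zip(w, pts)) for k in range(2))
    assert f(x) <= vertex_max + 1e-9     # convexity: interior never wins
```

Of course, for large $N$ this enumeration is exactly what one hopes the dual will help avoid.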
But duals exist even for nonconvex problems like this one. The dual is always a convex optimization problem, but it does not necessarily achieve the same optimal value. However, it often provides useful bounds on the primal problem (in this case, upper bounds). Sometimes the dual leads to useful heuristics for solving the primal. And in certain limited cases, strong duality is obtained, even for a nonconvex primal.
The precise form of the dual will depend upon the computational form you choose for the convex hull constraint. For instance, Boris suggested using this formulation, which is what I would suggest as well: $$\begin{array}{ll} \text{maximize} & f(x) \\ \text{subject to} & \textstyle \sum_{i=1}^N \alpha_i x_i = x \\ & \textstyle \sum_{i=1}^N \alpha_i = 1 \\ & \alpha_i \geq 0, ~i=1,2,\dots,N \end{array}$$ The negative Lagrangian (negative because we are maximizing) is $$L_-(x,\alpha,y,v,z)=f(x)+y^T(\textstyle \sum_i \alpha_i x_i-x)+v(\sum_i\alpha_i-1)+\sum_i z_i \alpha_i$$ where $y\in\mathbb{R}^n$ is the Lagrange multiplier for the constraint on $x$, $v\in\mathbb{R}$ is the Lagrange multiplier for the constraint on $\alpha$, and $z\in\mathbb{R}^N_+$ is the vector of Lagrange multipliers for the nonnegativity constraints on $\alpha$. The dual problem is $$\begin{array}{ll} \text{minimize} & \sup_{x,\alpha} L_-(x,\alpha,y,v,z)\\ \text{subject to} & z_i \geq 0, ~i=1,2,\dots,N \end{array}$$ We're technically done here, but we usually simplify a bit by eliminating the primal variables if possible, and extracting any implicit constraints in the process. For instance, for the supremum over $\alpha$ to be bounded, the coefficient of each $\alpha_i$ must vanish, so we must have $$x_i^Ty + v + z_i = 0 \quad\Longrightarrow\quad x_i^Ty+v\leq 0 \qquad (i=1,2,\dots,N)$$ which allows us to eliminate both $\alpha$ and $z$: $$\begin{array}{ll} \text{minimize} & \sup_x f(x)-y^Tx-v\\ \text{subject to} & x_i^Ty+v\leq 0, ~i=1,2,\dots,N \end{array}$$ Two more simplifications: if the optimum exists, at least one of the inequalities will be "tight"---that is, $x_i^T y + v = 0$ for at least one $i$. Otherwise, $v$ could be increased further, reducing the objective and contradicting a claim of optimality. 
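The $\alpha$-elimination step above can be checked concretely: in $L_-$, $\alpha$ enters only through $\sum_i (x_i^Ty + v + z_i)\alpha_i$, and since $\alpha$ is unrestricted in the supremum, boundedness forces every coefficient to zero, i.e. $z_i = -(x_i^Ty+v)$. A small numeric illustration with assumed values for the points and the dual pair $(y,v)$:

```python
# The alpha-part of L_- is sum_i (x_i.y + v + z_i) * alpha_i; its sup over
# unrestricted alpha is 0 iff every coefficient vanishes, which forces
# z_i = -(x_i.y + v), feasible (z_i >= 0) exactly when x_i.y + v <= 0.
pts = [(1.0, 0.0), (-2.0, 1.0), (0.5, 3.0)]
y, v = (0.5, -1.0), -2.0                       # an assumed dual point

coeffs = [sum(yk * pk for yk, pk in zip(y, p)) + v for p in pts]  # x_i.y + v
z = [-c for c in coeffs]                       # the forced choice of z
assert all(zi >= 0 for zi in z)                # dual feasibility
assert all(abs(c + zi) < 1e-12 for c, zi in zip(coeffs, z))  # coefficients vanish
```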
So we can eliminate $v$ as well: $$\begin{array}{ll} \text{minimize} & \max_i x_i^T y + \sup_x f(x)-y^Tx \\ \end{array}$$ Finally, note that $\sup_x f(x)-y^Tx$ is the convex conjugate of $-f$ evaluated at $-y$: $$\begin{array}{ll} \text{minimize} & \max_i x_i^T y + (-f)^*(-y) \\ \end{array}$$ So there's the simplified form of our dual. Of course, what this looks like computationally depends on the precise form of $(-f)^*(-y)$. It's quite possible that this is difficult or impractical to compute; indeed, since $\sup_x f(x)-y^Tx$ is finite only when $f$ is majorized by an affine function, it may even be $+\infty$ for every $y$, in which case the bound is vacuous. But if the dual can be solved, then the value of $x$ attaining $\sup_x f(x)-y^Tx$ may actually be a useful initial point for a local search on the primal problem.
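As a quick sanity check of the eliminated form $\min_y \big[\max_i x_i^Ty + \sup_x f(x)-y^Tx\big]$, consider an affine $f(x)=a^Tx+b$ (an assumed toy example, with $a$, $b$, and the points chosen arbitrarily): here $\sup_x f(x)-y^Tx$ is finite only at $y=a$, where it equals $b$, so the dual collapses to $\max_i a^Tx_i + b$ and matches the primal optimum exactly:

```python
# Affine example f(x) = a.x + b: sup_x f(x) - y.x = +inf unless y = a,
# where it equals b. The dual therefore evaluates to max_i a.x_i + b,
# which coincides with the primal optimum (strong duality here).
a, b = (2.0, -1.0), 0.5
pts = [(1.0, 0.0), (-2.0, 1.0), (0.5, 3.0)]

def f(x):
    return sum(ai * xi for ai, xi in zip(a, x)) + b

primal = max(f(p) for p in pts)   # vertex enumeration suffices for convex f
dual = max(sum(ai * pi for ai, pi in zip(a, p)) for p in pts) + b  # at y = a
assert abs(primal - dual) < 1e-12
```

For non-affine $f$ the supremum must be examined case by case, and, as noted above, may be infinite.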