I'm not an expert in optimization, but I am currently working on a problem where I need to maximize/minimize a function of the form \begin{align*} g(\alpha_0, \alpha_1) &= \sum_{i=1}^N c_i f(\alpha_0 + \alpha_1 d_i) \\ \text{subject to} \quad a &\leq f(\alpha_0 + \alpha_1 d_i) \leq b \quad \forall \, i = 1,2,\dots,N, \end{align*} where $c,\,d\in \mathbb{R}^N$ and $a,\,b \in \mathbb{R}$ are all fixed constants.
In this problem, I have some flexibility in the choice of $f(\cdot)$. I know that this reduces to a linear optimization problem when $f(\cdot)$ is linear, but I'm wondering: are there other classes of functions for which this is tractable? Or methods of function approximation that would allow me to get close to the optimum?
One function that is of particular interest to me is \begin{equation*} f(\alpha_0 + \alpha_1 d_i) = 1 + \exp(\alpha_0 + \alpha_1 d_i). \end{equation*} This is a convex function by itself, but $c_i f(\alpha_0 + \alpha_1 d_i)$ could be convex or concave depending on the sign of $c_i$, so my understanding is that the problem is neither convex nor concave, and hence not solvable by standard convex optimization.
Directing me to relevant literature/textbooks would be especially appreciated! Thank you.
First, the constraints are effectively linear: since $f$ is monotonically increasing, they are equivalent to simple bounds on $\alpha_0 + \alpha_1 d_i$; you've just written them in an awkward form.
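Concretely, for your $f(t) = 1 + \exp(t)$ (assuming $a > 1$ so the lower bound is nontrivial and the logarithm is defined), inverting $f$ gives the equivalent linear constraints \begin{equation*} a \leq 1 + \exp(\alpha_0 + \alpha_1 d_i) \leq b \iff \log(a-1) \leq \alpha_0 + \alpha_1 d_i \leq \log(b-1). \end{equation*}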
Since you only have two variables and nice nonlinearities, even a global solver will solve this problem rather quickly if $N$ is modest. In the code below, I throw it at the simple spatial branch-and-bound global solver in YALMIP (disclaimer: developed by me), and it is solved in a couple of seconds for $N=50$.
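A sketch of what such a script could look like (the data here is my own random illustrative instance, and the bound values are placeholders chosen so that $a > 1$; `bmibnb` is YALMIP's built-in spatial branch-and-bound solver):

```matlab
% Illustrative random data, not the asker's actual instance
N = 50;
c = randn(N,1);
d = randn(N,1);
a = 1.5; b = 10;                    % assume a > 1 so log(a-1) is defined

% Decision variables alpha0, alpha1
alpha = sdpvar(2,1);
t = alpha(1) + alpha(2)*d;          % affine expressions alpha0 + alpha1*d_i

% Objective: sum_i c_i * (1 + exp(alpha0 + alpha1*d_i))
objective = c'*(1 + exp(t));

% Monotonicity of f: a <= 1+exp(t) <= b  <=>  log(a-1) <= t <= log(b-1)
constraints = [log(a-1) <= t <= log(b-1)];

% Global solve with spatial branch-and-bound
ops = sdpsettings('solver','bmibnb');
optimize(constraints, objective, ops);  % minimizes; use -objective to maximize
value(alpha)
```

Swapping `'bmibnb'` for a local nonlinear solver (e.g. `'fmincon'`) in `sdpsettings` gives the local-solver comparison mentioned below.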
Also, for all the examples I tried, a local solver found the global solution too.