I am trying to solve the following maximization problem:
$$ \max \sum_{t=1}^T \left[ z_t g_t + (1-z_t)\, h_t(x_t) \right] $$ with respect to $z \in \{0,1\}^T$ and $x \in \mathbb{R}^T$. Each $h_t$ is a known concave function and each $g_t$ is a known constant. The initial state ($t=0$) is perfectly known.
In words, I would like to find the path for $x_t$ while allowing for a discrete choice at each period: if I choose $z_t = 1$, then I do not update $x_t$ (so $x_t = x_{t-1}$), but if I choose $z_t = 0$, I maximize with respect to $x_t$.
What is the best method for this? I have found solutions for fully discrete choice models, but not for a model that mixes continuous and discrete variables.
Thanks
T.
Introduce a new variable $y_t$ to represent the summand, so that the objective is to maximize $\sum_t y_t$. We want to enforce $$y_t=\begin{cases}g_t &\text{if $z_t=1$}\\h_t(x_t) &\text{if $z_t=0$.}\end{cases}$$

You can do this via the following big-M constraints:
\begin{align}
(\ell_t-g_t)(1-z_t) &\le y_t - g_t \le (u_t-g_t)(1-z_t),\\
(g_t - u_t)\, z_t &\le y_t - h_t(x_t) \le (g_t - \ell_t)\, z_t,
\end{align}
where $\ell_t$ and $u_t$ are (constant) lower and upper bounds on $h_t(x_t)$: $$\ell_t \le h_t(x_t) \le u_t.$$

When $z_t=1$, the first pair of inequalities collapses to $y_t = g_t$ while the second pair is slack; when $z_t=0$, the second pair collapses to $y_t = h_t(x_t)$ while the first pair is slack. Now the objective and these constraints are linear in $y_t$ and $z_t$. If all $h_t$ are linear functions, then you can solve the whole problem with a MILP solver. Otherwise, use a MINLP solver.
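As a quick sanity check of the big-M logic, here is a minimal brute-force sketch in Python. All data are made up for illustration, $h_t$ is taken to be linear ($h_t(x) = a_t x + b_t$, hence concave), $x_t$ is restricted to a small grid so exhaustive search is possible, and the $x_t = x_{t-1}$ linking constraint from the question is omitted — this only verifies that the big-M constraints pin $y_t$ to the intended value.

```python
import itertools

# Toy instance (hypothetical data): T periods, rewards g_t, linear h_t.
T = 3
g = [5.0, 1.0, 2.0]
a = [1.0, 2.0, -1.0]
b = [0.0, 0.0, 3.0]
grid = [0.0, 0.5, 1.0]          # candidate values for each x_t

def h(t, x):
    """h_t(x) = a_t * x + b_t (linear, hence concave)."""
    return a[t] * x + b[t]

# Constant bounds l_t <= h_t(x_t) <= u_t over the grid (the big-M values).
l = [min(h(t, x) for x in grid) for t in range(T)]
u = [max(h(t, x) for x in grid) for t in range(T)]

def feasible(t, z, x, y):
    """Check both big-M constraint pairs for period t."""
    c1 = (l[t] - g[t]) * (1 - z) <= y - g[t] <= (u[t] - g[t]) * (1 - z)
    c2 = (g[t] - u[t]) * z <= y - h(t, x) <= (g[t] - l[t]) * z
    return c1 and c2

# Brute force over z in {0,1}^T and x on the grid: the intended value
# y_t = g_t (if z_t = 1) or h_t(x_t) (if z_t = 0) must satisfy the
# big-M constraints, and we track the best objective found.
best = -float("inf")
for zs in itertools.product([0, 1], repeat=T):
    for xs in itertools.product(grid, repeat=T):
        ys = [g[t] if zs[t] else h(t, xs[t]) for t in range(T)]
        assert all(feasible(t, zs[t], xs[t], ys[t]) for t in range(T))
        best = max(best, sum(ys))
print(best)
```

A real implementation would hand the same constraints to a MILP/MINLP solver rather than enumerating; the enumeration here is only to confirm the linearization behaves as claimed on a tiny instance.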