I'm trying to minimize a cost function that is made up of dependent binary variables and continuous variables.
For example the cost function could look like:
$F(x_{0}, x_{1}, x_{2}, r_{0}, r_{1}) = 0.8x_{0} + 0.2x_{0}x_{1} + 0.7 x_{2}r_{1} + 0.4 x_{1}x_{2}r_{1} + 0.3 x_{1} + 0.2 r_{0} + 0.3 x_{1}r_{1}$
subject to the constraints $\sqrt{r_{0}^{2}+r_{1}^{2}}=1$, $r_{i}\in \mathbb{R}$, and $x_{i}\in \{-1,+1\}$.
In general there can be many $x_{i}$ and $r_{i}$ variables, where $\vec{r}$ must be a unit (normalized) vector and at most one component of $\vec{r}$ appears in any given term of the sum. Several distinct $x_{i}$, however, can appear in a single term (no repeats).
What is the best approach to solve this minimization problem?
Currently, I brute-force over all $2^{n}$ binary assignments of the $x_{i}$ and then solve the resulting subproblem in $r_{0}$ and $r_{1}$. This works in small cases but clearly will not scale.
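For reference, here is a minimal sketch of this brute-force scheme on the example cost function above. For any fixed assignment of the $x_{i}$, $F$ reduces to $c_{0} + c_{r_0} r_{0} + c_{r_1} r_{1}$ with $\|\vec{r}\|=1$, so the inner problem has the closed-form minimizer $\vec{r} = -\vec{c}/\|\vec{c}\|$ with value $c_{0} - \|\vec{c}\|$ (no numerical solver needed); the function and variable names are my own:

```python
import itertools
import math

def inner_min(x):
    """For a fixed binary assignment x = (x0, x1, x2), the example cost is
    F = c0 + c[0]*r0 + c[1]*r1 with r0**2 + r1**2 = 1, minimized in closed
    form at r = -c/||c||, giving the value c0 - ||c||."""
    x0, x1, x2 = x
    c0 = 0.8*x0 + 0.2*x0*x1 + 0.3*x1       # terms with no r component
    c = (0.2,                               # coefficient of r0
         0.7*x2 + 0.4*x1*x2 + 0.3*x1)      # coefficient of r1
    norm = math.hypot(*c)
    # If all r-coefficients vanish, any unit vector is optimal.
    r = tuple(-ci / norm for ci in c) if norm > 0 else (1.0, 0.0)
    return c0 - norm, r

# Enumerate all 2^3 assignments of (x0, x1, x2) and keep the best.
best = min(((inner_min(x), x) for x in itertools.product((-1, 1), repeat=3)),
           key=lambda t: t[0][0])
(value, r), x = best
print(value, x, r)
```

The exponential factor is the outer enumeration; the inner step is $O(1)$ per assignment, which is why this only helps for small $n$.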
Could a second-order cone program (SOCP) be used? That would first require linearizing the products of binary variables, which seems possible (nonlinear binary programming).
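To illustrate what such a linearization could look like (this is the standard McCormick-style product linearization, not something taken from the question): mapping each $x_{i}\in\{-1,+1\}$ to $b_{i}=(x_{i}+1)/2\in\{0,1\}$ gives $x_{i}x_{j}=4b_{i}b_{j}-2b_{i}-2b_{j}+1$, and the product $y=b_{i}b_{j}$ can be replaced by linear constraints. A small check that the constraints pin down the product exactly:

```python
import itertools

def linearization_feasible(bi, bj, y):
    """Standard linearization of y = bi*bj for 0/1 variables:
    y <= bi, y <= bj, y >= bi + bj - 1, y >= 0."""
    return y <= bi and y <= bj and y >= bi + bj - 1 and y >= 0

# For every 0/1 pair, the only integer y satisfying the constraints
# is exactly the product bi*bj.
for bi, bj in itertools.product((0, 1), repeat=2):
    feasible = [y for y in (0, 1) if linearization_feasible(bi, bj, y)]
    assert feasible == [bi * bj]
print("linearization matches the product on all 0/1 pairs")
```

With the bilinear binary terms replaced this way, only the coupling between the binaries and $\vec{r}$ and the norm constraint remain, which is where a conic (or mixed-integer SOCP) formulation would have to enter.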