I am formulating a problem and intend to solve it by optimization.
Here is the current result:
*Objective:* $\quad\min\quad c + f_1(x)\,x_1 + f_2(x)\,x_2$
*Constraint:* $\quad a x_1 + b x_2 \le d$
where $a, b, c, d$ are constants; $x_1, x_2$ are variables restricted to integer values; and $f_1(x), f_2(x)$ are functions of the vector $x = (x_1, x_2)$. Neither $f_1$ nor $f_2$ is continuous.
My current approach is to solve this iteratively as an (integer) linear program: in each run I plug the values of $x_1, x_2$ from the previous run into $f_1(x), f_2(x)$, and use those outputs as constant coefficients in the objective. The process terminates when two consecutive runs return the same $x_1, x_2$.
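For concreteness, here is a minimal sketch of this fixed-point iteration. The data ($a, b, c, d$), the step functions $f_1, f_2$, the variable bounds, and the brute-force inner solver (a stand-in for a real ILP solver) are all illustrative assumptions, not part of the original problem:

```python
import itertools

# Hypothetical example data (assumptions for illustration only).
a, b, c, d = 1.0, 2.0, 5.0, 10.0

# Illustrative discontinuous coefficient functions (step functions).
def f1(x):
    return 3.0 if x[0] + x[1] > 4 else 1.0

def f2(x):
    return 2.0 if x[1] > 2 else 4.0

def solve_inner(c1, c2, bound=10):
    """Solve min c + c1*x1 + c2*x2 s.t. a*x1 + b*x2 <= d over integers
    in [-bound, bound]^2 by brute force (stand-in for an ILP solver)."""
    best_val, best_x = None, None
    for x1, x2 in itertools.product(range(-bound, bound + 1), repeat=2):
        if a * x1 + b * x2 <= d:
            val = c + c1 * x1 + c2 * x2
            if best_val is None or val < best_val:
                best_val, best_x = val, (x1, x2)
    return best_x

def fixed_point_iteration(x0=(0, 0), max_iter=50):
    """Repeat: freeze f1, f2 at the previous iterate, re-solve the ILP.
    Stop when two consecutive runs agree; max_iter guards against cycling,
    since termination is exactly what is not guaranteed."""
    x = x0
    for _ in range(max_iter):
        x_new = solve_inner(f1(x), f2(x))
        if x_new == x:
            return x, True
        x = x_new
    return x, False

x_star, converged = fixed_point_iteration()
```

Note the `max_iter` cap: because the map from iterate to next iterate acts on a finite set here, non-termination shows up as a cycle of length greater than one, which this sketch simply detects by running out of iterations.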
The problem is that I cannot find a rigorous proof that this process will terminate. Any ideas on how to establish that? Alternatively, if I abandon this method, what class of optimization problem is this, and what techniques could solve it?