Suppose I have the following problem:
Maximize: $\quad\quad x_1+x_2+x_3+x_4$
Subject to: $\quad\quad \dfrac{\gamma\;a_1\;x_1}{\gamma\;a_2\;x_4+1}\geq1$,
$\quad\quad\quad\quad\;\;\quad\quad \dfrac{\gamma\;a_3\;x_2}{\gamma\;a_4\;x_3+1}\geq1$,
$\quad\quad\quad\quad\;\;\quad\quad\;x_i\in\{0, 1\}\;\forall\; i\in\{1, 2, 3, 4\}$.
The $a_i\;\forall\;i$ and $\gamma$ are real numbers given as input. Clearly, the optimal solution to this problem depends on the $a_i$ and on $\gamma$. Let us say I fix the $a_i\;\forall\;i$; I then want to find the optimal value as a function of $\gamma$. Can this kind of problem be solved analytically?
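For concreteness, with only four binary variables the optimum for any fixed $\gamma$ can be evaluated by enumeration. A minimal sketch in Python (the inputs $a=(1,1,1,1)$ and $\gamma$ values are hypothetical):

```python
from itertools import product

def opt_value(gamma, a):
    """Brute-force OPT(gamma): enumerate all 16 binary vectors and keep the
    best feasible objective; returns None if the problem is infeasible."""
    best = None
    for x1, x2, x3, x4 in product((0, 1), repeat=4):
        d1 = gamma * a[1] * x4 + 1   # denominator of the first constraint
        d2 = gamma * a[3] * x3 + 1   # denominator of the second constraint
        if d1 == 0 or d2 == 0:
            continue                 # constraint undefined for this x
        if gamma * a[0] * x1 / d1 >= 1 and gamma * a[2] * x2 / d2 >= 1:
            val = x1 + x2 + x3 + x4
            best = val if best is None else max(best, val)
    return best

print(opt_value(2.0, (1, 1, 1, 1)))  # x = (1, 1, 0, 0) is feasible; prints 2
print(opt_value(0.5, (1, 1, 1, 1)))  # no assignment is feasible; prints None
```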
More generally, suppose I have the following problem:
Maximize: $\quad\quad f_0(x)$
Subject to: $\quad\quad f_i(x,\gamma)\geq\;1\;\forall\;i\in\{1, \dotsc, n\}$
$\quad\quad\quad\quad\;\;\quad\quad\;x\in\mathbb{R}^n$.
Let us denote the optimal value by OPT and the optimal vector by $x^*$. Clearly, OPT depends on $\gamma$ and we have OPT$(x^*, \gamma)$.
How can I find OPT$(x^*, \gamma)$ for different values of $\gamma$?
Can someone please give some reference or some useful works on this kind of parametric problems?
N.B. There is no problem in supposing that the $f_i$ are all convex or even linear.
First of all: I would argue that what you are asking for is not a function of $x^*$ and $\gamma$, but in fact just $\gamma$. After all, $x^*$ depends on $\gamma$. So in fact, the function is just $\text{OPT}(\gamma)$.
In general, even if you assume linearity or concavity in the $f_i$ functions, you cannot solve this problem analytically. For a given $\gamma$, the value of $\text{OPT}(\gamma)$ requires the solution of the associated optimization problem; and even if it is an LP, this will require a numerical solution. In the most general case, determining the shape of $\text{OPT}(\gamma)$ will simply require solving this optimization problem for a grid of points $\gamma$. And if the model is intractable (e.g., non-convex), well, this is a daunting proposition.
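As a sketch of that grid approach, here the brute-force enumeration of the toy four-variable problem stands in for the inner solver (a realistic instance would call an LP/NLP solver at each grid point; the inputs $a$ are hypothetical):

```python
from itertools import product

def opt_value(gamma, a):
    # Inner solver: brute force over the 16 binary vectors of the toy problem.
    # A realistic instance would call an LP/NLP solver here instead.
    best = None
    for x in product((0, 1), repeat=4):
        d1 = gamma * a[1] * x[3] + 1
        d2 = gamma * a[3] * x[2] + 1
        if d1 != 0 and d2 != 0 and \
           gamma * a[0] * x[0] / d1 >= 1 and gamma * a[2] * x[1] / d2 >= 1:
            best = sum(x) if best is None else max(best, sum(x))
    return best

a = (2.0, 1.0, 2.0, 1.0)                    # hypothetical fixed inputs
grid = [0.1 * k for k in range(1, 21)]      # gamma in (0, 2]
curve = [(g, opt_value(g, a)) for g in grid]
```

For these inputs the curve is piecewise constant: infeasible below $\gamma=0.5$, value 2 on $[0.5, 1)$, and value 4 from $\gamma=1$ on.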
In specific cases, of course, you may indeed be able to do better. For instance, you may be able to use warm start methods to compute $\text{OPT}(\gamma_2)$ more quickly given the solution for a nearby value $\gamma_1$. If the functions $f_i$ are linear or concave, you may be able to study the dual problem to find "inflection points" in $\text{OPT}(\gamma)$: points where the active set of constraints changes. If you can do this, you could potentially greatly reduce the number of points where you have to solve for $\text{OPT}(\gamma)$ from scratch; the remaining points could be interpolations of these.
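A crude numerical way to locate such inflection points, again using the toy problem's enumerable feasible set as the solver (the inputs and grid are illustrative): scan a $\gamma$ grid and record where the optimal vector changes; those grid points bracket the true breakpoints, which bisection could then refine.

```python
from itertools import product

def opt_solution(gamma, a):
    """(OPT, argmax x) by brute force for the toy problem; (None, None) if infeasible."""
    best = (None, None)
    for x in product((0, 1), repeat=4):
        d1 = gamma * a[1] * x[3] + 1
        d2 = gamma * a[3] * x[2] + 1
        if d1 != 0 and d2 != 0 and \
           gamma * a[0] * x[0] / d1 >= 1 and gamma * a[2] * x[1] / d2 >= 1:
            if best[0] is None or sum(x) > best[0]:
                best = (sum(x), x)
    return best

def breakpoints(a, grid):
    """Grid points where the optimal solution differs from the previous one."""
    pts, prev = [], opt_solution(grid[0], a)
    for g in grid[1:]:
        cur = opt_solution(g, a)
        if cur != prev:
            pts.append(g)
        prev = cur
    return pts

# With a = (2, 1, 2, 1), the optimum jumps at gamma = 0.5 and gamma = 1:
print(breakpoints((2.0, 1.0, 2.0, 1.0), [0.1 * k for k in range(1, 21)]))
```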
I'm not sure there is going to be good literature for this problem as you've stated it most generally. But I would encourage you to sic the Google on "continuation methods" and "Pareto curve continuation", and start reading. Pareto curve continuation in particular seems like a relevant related topic, though in that case the $\gamma$ value appears in the objective, not the constraints.
For your specific problem with four binary variables: I would suggest that an ordered exhaustive search would be your best bet. That is, for a fixed $\gamma$, first try $\sum_i x_i=4$, then try all combinations with $\sum_i x_i=3$, then $\sum_i x_i=2$, etc. until such time as you find a feasible solution, or you exhaust them all, in which case the problem is infeasible. If you are certain that the quantities $\gamma a_2 x_4 + 1$ and $\gamma a_4 x_3 + 1$ are positive, then you can multiply through by those denominators to yield a binary linear program, which will likely be easier to solve. And you may even be able to determine at least potential inflection points by studying the LP dual; I do not know.
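A sketch of that ordered search, evaluating the constraints in their original fractional form so that no sign assumption on the denominators is needed:

```python
from itertools import combinations

def opt_ordered(gamma, a):
    """Try sum(x) = 4, then 3, ...; return the first feasible (value, x),
    or (None, None) if every assignment is infeasible."""
    def feasible(x):
        # Fractional form of the constraints; zero denominators are rejected.
        d1 = gamma * a[1] * x[3] + 1
        d2 = gamma * a[3] * x[2] + 1
        return d1 != 0 and d2 != 0 and \
               gamma * a[0] * x[0] / d1 >= 1 and gamma * a[2] * x[1] / d2 >= 1
    for k in range(4, -1, -1):
        for ones in combinations(range(4), k):
            x = tuple(int(i in ones) for i in range(4))
            if feasible(x):
                return k, x
    return None, None
```

Because the objective is just the cardinality of $x$, the first feasible point found at a given cardinality is optimal, so the search stops immediately.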