Linear programming, optimization

I want to solve an optimization problem for $\gamma$

$F(\gamma)=\log_2\left(1+\frac{(1-Q)hp}{\sigma^2}\right)+Q\gamma p h_r-\gamma p(1-Q)$

s.t.: $\left(\frac{a}{b\gamma}-\frac{c}{d}\right)hp > Q$

I have solved this problem by considering the upper limit for $\gamma$ as :

$\gamma < \frac{a h p}{b(Q+\frac{chp}{d})}$
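For reference, this bound follows by rearranging the constraint, assuming $b\gamma > 0$, $hp > 0$, and $Q+\frac{chp}{d} > 0$ so that no inequality direction flips:

$$\left(\frac{a}{b\gamma}-\frac{c}{d}\right)hp > Q \;\Longleftrightarrow\; \frac{ahp}{b\gamma} > Q+\frac{chp}{d} \;\Longleftrightarrow\; \gamma < \frac{ahp}{b\left(Q+\frac{chp}{d}\right)}.$$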

Since $\gamma$ must be less than $\frac{a h p}{b(Q+\frac{chp}{d})}$, I took this upper limit to be the optimal value of $\gamma$. But I am not sure whether this is correct.

Your kind help will be appreciated.

Best answer:

Since the inequality is strict, $\gamma$ cannot actually attain the upper bound $\gamma_\max:=\frac{ahp}{b(Q+\frac{chp}{d})}$, or else the point would be infeasible; it can only get arbitrarily close. I am also assuming that you have checked that the signs of the constants $h$, $p$, $a$, and $b$ are consistent with the manipulation you performed to obtain the direction of the inequality $\gamma<\gamma_\max$. Depending on the values of those four constants, there may in fact also be a lower bound $\gamma>\gamma_\min$.

Whether there is a solution or not depends on the values of the constants $Q$, $p$, and $h_r$, since the objective is linear in $\gamma$ with slope $Qph_r - p(1-Q)$.

  • If $Qph_r > p(1-Q)$: if the problem is a minimization, the objective function is unbounded (it can be made arbitrarily small). If it is a maximization, there is no optimum, but the closer $\gamma$ is to $\gamma_\max$ (from below), the better the objective.

  • If $Qph_r < p(1-Q)$: if the problem is a maximization, the objective function is unbounded. If it is a minimization, there is no optimum, but the closer $\gamma$ is to $\gamma_\max$ (from below), the better the objective.

  • If $Qph_r = p(1-Q)$, the objective function is constant (it does not depend on $\gamma$), so every feasible $\gamma$ is a global optimum, regardless of whether the problem is a minimization or a maximization.
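A small numerical sketch of the first case above, with made-up constants chosen only so that $Qph_r > p(1-Q)$ (they are not from the original problem): since $F$ is linear in $\gamma$ with positive slope, the supremum over the feasible set is approached, but never attained, as $\gamma \to \gamma_\max$ from below.

```python
import math

# Hypothetical constants, chosen only to illustrate the slope-positive case
# Q*p*h_r > p*(1-Q); they are not from the original problem.
a, b, c, d = 1.0, 2.0, 0.5, 4.0
h, p, h_r = 1.0, 2.0, 3.0
Q, sigma2 = 0.3, 1.0

def F(gamma):
    # Objective: log2(1 + (1-Q)hp/sigma^2) + Q*gamma*p*h_r - gamma*p*(1-Q).
    return (math.log2(1 + (1 - Q) * h * p / sigma2)
            + Q * gamma * p * h_r
            - gamma * p * (1 - Q))

def feasible(gamma):
    # Strict constraint: (a/(b*gamma) - c/d) * h * p > Q.
    return (a / (b * gamma) - c / d) * h * p > Q

gamma_max = a * h * p / (b * (Q + c * h * p / d))
slope = Q * p * h_r - p * (1 - Q)  # coefficient of gamma in F; here > 0

# Points just below gamma_max are feasible, and F increases as gamma
# approaches gamma_max from below, so the supremum is never attained.
for eps in (1e-2, 1e-4, 1e-6):
    g = gamma_max - eps
    print(g, feasible(g), F(g))
```

Points above $\gamma_\max$ violate the constraint, and the printed objective values increase as $\varepsilon$ shrinks, matching the first bullet.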