How do I optimize a function subject to a two-part constraint?


I would like to maximize the following function

$$\max\; U = \log(xT_o + (1-x)T_s) + \log(Y)$$

by choosing levels of $T_o$, $T_s$, and $Y$, where $x\in[0,1]$,

subject to

$$N = \begin{cases} P_sT_s+Y & \text{if } T_o=0 \\ P_sT_s+P_oT_o+P_f+P_a+Y & \text{if } T_o>0 \end{cases}$$

How do I go about solving this? Looking into economics literature on contingent commodities, I suspect there might be a way to collapse the constraint down into a single equation, but I'm not sure how.

If it helps to have some context, I'm making a simple economic model, and the idea is that there is a utility function that depends on consumption of three goods, $T_o$, $T_s$, and $Y$. There are fixed costs $P_f$ and $P_a$ associated with choosing to consume at least one unit of $T_o$. $P_o$ and $P_s$ are the marginal costs of $T_o$ and $T_s$, respectively. $N$ is the individual's endowment.
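To make the two-part budget concrete, here is a minimal Python sketch of the expenditure it describes; the numeric prices and quantities are hypothetical, chosen only for illustration:

```python
def total_cost(To, Ts, Y, Ps, Po, Pf, Pa):
    """Expenditure under the two-part constraint: the marginal cost Po*To and
    the fixed costs Pf, Pa are incurred only when To > 0."""
    cost = Ps * Ts + Y
    if To > 0:
        cost += Po * To + Pf + Pa
    return cost

# Hypothetical prices: Ps = 2, Po = 1, Pf = Pa = 5.
c0 = total_cost(0, 25, 50, 2, 1, 5, 5)   # To = 0:  2*25 + 50           = 100
c1 = total_cost(45, 0, 45, 2, 1, 5, 5)   # To > 0:  0 + 45 + 45 + 5 + 5 = 100
```

A consumption plan is budget-feasible when this expenditure equals the endowment $N$.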

Best answer:

Thanks to the OP for the clarifications. I have a couple of suggestions.

First of all, the budget in the $T_o>0$ case implies $P_oT_o \leq N - P_f - P_a$, so $T_o \leq T_{\max} \triangleq ( N - P_f - P_a ) / P_o$ (which I assume is positive; otherwise the model is infeasible). Note that $T_{\max}$ is a constant. We'll use this later.

One strategy is to solve two models: the "positive model," which uses the $T_o>0$ constraint and produces a utility value $U_+$; and the "zero model," which uses the $T_o=0$ constraint and produces a utility value $U_0$. Then pick whichever model produces the larger utility.
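For what it's worth, this two-model strategy is easy to carry out numerically. Here is a sketch in Python using `scipy.optimize.minimize` (SLSQP); every parameter value below ($x$, $N$, and the prices) is a hypothetical choice for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical parameters, for illustration only.
x, N = 0.5, 100.0
Ps, Po, Pf, Pa = 2.0, 1.0, 5.0, 5.0

def solve_zero():
    # Zero model: To = 0, budget N = Ps*Ts + Y.  Variables: (Ts, Y).
    obj = lambda v: -(np.log((1 - x) * v[0]) + np.log(v[1]))
    con = {'type': 'eq', 'fun': lambda v: N - Ps * v[0] - v[1]}
    res = minimize(obj, [10.0, 10.0], constraints=[con],
                   bounds=[(1e-9, None)] * 2, method='SLSQP')
    return -res.fun, res.x

def solve_positive():
    # Positive model: To > 0, budget N = Ps*Ts + Po*To + Pf + Pa + Y.
    # Variables: (To, Ts, Y).
    obj = lambda v: -(np.log(x * v[0] + (1 - x) * v[1]) + np.log(v[2]))
    con = {'type': 'eq',
           'fun': lambda v: N - Ps * v[1] - Po * v[0] - Pf - Pa - v[2]}
    res = minimize(obj, [10.0, 10.0, 10.0], constraints=[con],
                   bounds=[(1e-9, None), (0.0, None), (1e-9, None)],
                   method='SLSQP')
    return -res.fun, res.x

U0, _ = solve_zero()
Up, _ = solve_positive()
best = 'positive' if Up > U0 else 'zero'
```

With these particular numbers the positive model wins; pushing the fixed costs $P_f+P_a$ high enough flips the comparison in favor of the zero model.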

If by chance solving the positive model happens to return $T_o=0$, you can be sure that the zero model will yield a utility value at least as high. Here's why: let $(0,\bar{T}_s,\bar{Y})$ be the specific values that come out of the positive model, with utility $U_{+}=\log((1-x)\bar{T}_s)+\log(\bar{Y})$. Then the point $(0,\bar{T}_s,\bar{Y}+P_f+P_a)$ satisfies the zero-model budget $N=P_s\bar{T}_s+(\bar{Y}+P_f+P_a)$, so it is feasible for the zero model. It will not necessarily be optimal there, but it already has a higher utility value: $$U_{0}\geq\log((1-x)\bar{T}_s)+\log(\bar{Y}+P_f+P_a) \geq \log((1-x)\bar{T}_s)+\log(\bar{Y}) = U_{+}.$$ So whenever the positive model suggests $T_o=0$, you can be sure that the zero model will win. (If your utility function weren't increasing in $Y$, this might not be so.)
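The budget-shifting step in this argument is easy to check numerically. In the Python sketch below, all of the numbers ($x$, $N$, the prices, and the returned $\bar{T}_s$) are hypothetical:

```python
import math

# Hypothetical parameters, for illustration only.
x, N = 0.5, 100.0
Ps, Pf, Pa = 2.0, 5.0, 5.0

# Suppose the positive model returned To = 0 with this (hypothetical) Ts:
Ts_bar = 20.0
Y_bar = N - Ps * Ts_bar - Pf - Pa   # satisfies the positive budget with To = 0

# Move the now-unneeded fixed costs into Y: feasible for the zero model.
Y_shift = Y_bar + Pf + Pa
assert abs(N - (Ps * Ts_bar + Y_shift)) < 1e-12   # zero-model budget holds

U_plus = math.log((1 - x) * Ts_bar) + math.log(Y_bar)
U_feas = math.log((1 - x) * Ts_bar) + math.log(Y_shift)
assert U_feas > U_plus   # the shifted point already beats the positive model
```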

Now, if you want to be able to handle more general models, and you happen to have a convex optimization engine around that can handle binary variables, have it solve this model: $$\begin{array}{ll} \text{maximize} & \log(xT_o+(1-x)T_s)+\log Y \\ \text{subject to} & N = P_sT_s + P_oT_o + B(P_f+P_a) + Y \\ & T_s, T_o \geq 0 \\ & T_o \leq B T_{\max} \\ & B\in\{0,1\} \end{array}$$ To see why this works, note that if $B=0$, then the $T_o\leq BT_{\max}$ constraint forces $T_o=0$, and it also removes the fixed costs from the budget constraint. If $B=1$, then $T_o\leq T_{\max}$ lets $T_o$ take on any sensible value, but it adds the fixed cost back in.

Now, what a solver is likely to do here is to first solve the problem by replacing $B\in\{0,1\}$ with $0\leq B \leq 1$. If the optimal solution to this relaxed problem has either $B=0$ or $B=1$, then it will stop, knowing that it has the right answer to the original as well. Otherwise, it will effectively try both $B=0$ and $B=1$ separately. (For problems with more binary variables, solvers typically do more advanced things with cutting planes etc., but for one variable, I'm almost certain this is how it will go.)
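That relax-then-branch loop can be imitated by hand. The sketch below (Python with `scipy.optimize`; all parameter values are hypothetical) first solves the relaxation with $0\leq B\leq 1$, and, if $B$ comes out fractional, re-solves with $B$ pinned to $0$ and to $1$ and keeps the better of the two:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical parameters, for illustration only.
x, N = 0.5, 100.0
Ps, Po, Pf, Pa = 2.0, 1.0, 5.0, 5.0
Tmax = (N - Pf - Pa) / Po          # constant upper bound on To from the budget

def solve(B_lo, B_hi):
    """Maximize the utility over (To, Ts, Y, B) with B_lo <= B <= B_hi."""
    obj = lambda v: -(np.log(x * v[0] + (1 - x) * v[1]) + np.log(v[2]))
    cons = [
        # Budget: N = Ps*Ts + Po*To + B*(Pf + Pa) + Y
        {'type': 'eq',
         'fun': lambda v: N - Ps * v[1] - Po * v[0] - (Pf + Pa) * v[3] - v[2]},
        # To <= B * Tmax
        {'type': 'ineq', 'fun': lambda v: Tmax * v[3] - v[0]},
    ]
    x0 = [1.0, 10.0, 10.0, 0.5 * (B_lo + B_hi)]
    res = minimize(obj, x0, constraints=cons, method='SLSQP',
                   bounds=[(0, None), (0, None), (1e-9, None), (B_lo, B_hi)])
    return -res.fun, res.x

# Step 1: solve the relaxation 0 <= B <= 1.
U_rel, v_rel = solve(0.0, 1.0)
B_rel = v_rel[3]

# Step 2: if B is fractional, branch on B = 0 and B = 1.
if 1e-6 < B_rel < 1 - 1e-6:
    U0, _ = solve(0.0, 0.0)
    U1, _ = solve(1.0, 1.0)
    U_best, B_best = max((U0, 0), (U1, 1))
else:
    U_best, B_best = U_rel, round(B_rel)
```

The relaxed optimum is always an upper bound on the mixed-integer optimum, which is what lets a branch-and-bound solver prune.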

EDITED to add: if you use my software CVX to solve this problem, here is the model I would use:

cvx_begin
   cvx_solver mosek % or gurobi
   variables To Ts Y
   variable B binary
   maximize(geo_mean([x*To+(1-x)*Ts Y]))
   subject to
      N == Ps * Ts + Po * To + B * ( Pf + Pa ) + Y
      0 <= To <= B * ( N - Pf - Pa ) / Po   % constant bound; keeps the constraint DCP-valid
      Ts >= 0
cvx_end

This objective, $\sqrt{(xT_o+(1-x)T_s)Y}$, is compatible with the solvers under the hood; and since it is a monotone increasing transformation of your original objective (the exponential of half of it), the two problems have the same optimal points.