Maximize $\displaystyle\sum_{i=1}^4\frac{A_i(1-e^{-k_it_i})}{t_i}$ subject to $t_1 + t_2 + t_3 + t_4 = T$


I'm working on a personal project and I have the function $$ P(t_1,t_2,t_3,t_4) = \cfrac{A_1(1-e^{-k_1t_1})}{t_1} + \cfrac{A_2(1-e^{-k_2t_2})}{t_2} + \cfrac{A_3(1-e^{-k_3t_3})}{t_3} +\cfrac{A_4(1-e^{-k_4t_4})}{t_4} $$ where the $ A_i $ and $ k_i $ are known nonnegative constants and $ t_1,t_2,t_3,t_4 >0 $. I'm trying to maximize this function under the constraint $ g(t_1,t_2,t_3,t_4)= t_1+t_2+t_3+t_4 = T $ for some value of $ T >0 $.

I set up the Lagrange multipliers as follows:

$ \nabla P = \lambda\nabla g $

which results in the equations:

$ \cfrac{A_1\left(t_1k_1e^{-k_1t_1} + e^{-k_1t_1} - 1 \right)}{t_1^2} = \lambda $

$ \cfrac{A_2\left(t_2k_2e^{-k_2t_2} + e^{-k_2t_2} - 1 \right)}{t_2^2} = \lambda $

$ \vdots $

$ \cfrac{A_4\left(t_4k_4e^{-k_4t_4} + e^{-k_4t_4} - 1 \right)}{t_4^2} = \lambda $

I tried multiple times to isolate the $ t_i $'s but was unsuccessful. Is it even possible to isolate them (with the use of some special function, or some trick)? If not, how could I solve this problem numerically?

Any help would be greatly appreciated.

Accepted answer:

After isolating the exponential term, each of your stationarity conditions reduces to solving the following equation for $t$ (dropping the subscript $i$):

$$e^{-kt}=\frac{\lambda t^2+A}{A(kt+1)}$$

Equations of this type fall under Generalizations of the Lambert W function and are generally not solvable in closed form.
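Even without a closed form, the stationarity system can be solved numerically. Each term of $P$ is convex in its own $t_i$, so its derivative is strictly increasing: for a fixed $\lambda$ each equation has a unique root that plain bisection finds, and an outer bisection then tunes $\lambda$ until the $t_i$ sum to $T$. A minimal sketch, with invented values for the $A_i$, $k_i$, and $T$ (whether the resulting stationary point is the extremum you actually want still needs checking):

```python
import math

# Invented example constants.
A = [3.0, 2.0, 5.0, 1.5]
k = [0.8, 1.2, 0.5, 2.0]
T = 10.0

def dPdt(a, kk, t):
    # d/dt [a(1 - e^{-kt})/t] = a(kt e^{-kt} + e^{-kt} - 1)/t^2;
    # math.expm1 keeps e^{-kt} - 1 accurate when t is tiny.
    return a * (kk * t * math.exp(-kk * t) + math.expm1(-kk * t)) / t**2

def t_of_lambda(a, kk, lam, lo=1e-9, hi=1e9):
    # dPdt is strictly increasing in t (each term of P is convex), rising
    # from -a*k^2/2 toward 0, so bisection finds the unique root.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dPdt(a, kk, mid) < lam:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Outer bisection on lambda: sum_i t_i(lambda) increases with lambda, so
# tune lambda inside its valid range (max_i -A_i k_i^2/2, 0) until the
# times sum to T.  (Assumes the sum actually attains T in that range.)
lam_lo = max(-a * kk * kk / 2 for a, kk in zip(A, k)) + 1e-9
lam_hi = -1e-12
for _ in range(200):
    lam = 0.5 * (lam_lo + lam_hi)
    if sum(t_of_lambda(a, kk, lam) for a, kk in zip(A, k)) < T:
        lam_lo = lam
    else:
        lam_hi = lam

ts = [t_of_lambda(a, kk, lam) for a, kk in zip(A, k)]
```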

Rather than trying to solve it analytically, you can solve it numerically. There are plenty of tools for constrained optimization problems (you can even use Excel, for example).
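For instance, SciPy's SLSQP method handles the equality constraint and the positivity bounds directly. A sketch, with the $A_i$, $k_i$, and $T$ values invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Invented example constants.
A = np.array([3.0, 2.0, 5.0, 1.5])
k = np.array([0.8, 1.2, 0.5, 2.0])
T = 10.0

def neg_P(t):
    t = np.maximum(t, 1e-12)   # guard against 0/0 if an iterate touches 0
    # minimize() minimizes, so return -P.
    return -np.sum(A * (1.0 - np.exp(-k * t)) / t)

res = minimize(
    neg_P,
    x0=np.full(4, T / 4),                                   # even-split start
    method="SLSQP",
    bounds=[(1e-9, T)] * 4,                                 # t_i > 0
    constraints=[{"type": "eq", "fun": lambda t: t.sum() - T}],
)
t_opt = res.x                    # optimized times; -res.fun is the P value
```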

For some software, it can help to soften the constraints by adding them to the objective function as penalty terms (similar in spirit to Lagrange multipliers):

$$\hat P(\vec t)=P(\vec t)-\lambda\left[\left(T-\sum\vec t\right)^2+\|\vec f(\vec t)\|^2\right]$$

$$\vec f(\vec t)_i=\min(t_ie^{-k_it_i},0)$$

The $\lambda$ term enforces the constraint $\sum\vec t=T$ together with the constraints $\vec t\ge0$. By gradually scaling $\lambda\to\infty$, the optimizer prioritizes driving the penalty to zero, i.e. satisfying your constraints, before attempting to optimize $P(\vec t)$.
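A rough sketch of that homotopy with an unconstrained optimizer (constants again invented; the choice of BFGS and the particular $\lambda$ schedule are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([3.0, 2.0, 5.0, 1.5])   # invented example constants
k = np.array([0.8, 1.2, 0.5, 2.0])
T = 10.0

def P(t):
    ts = np.where(t == 0.0, 1e-12, t)            # avoid 0/0 at t = 0
    return np.sum(A * (1.0 - np.exp(-k * ts)) / ts)

def neg_P_hat(t, lam):
    f = np.minimum(t * np.exp(-k * t), 0.0)      # nonzero only where t_i < 0
    penalty = (T - t.sum()) ** 2 + np.sum(f ** 2)
    return -(P(t) - lam * penalty)

x = np.full(4, T / 4)                            # feasible starting point
for lam in [1.0, 10.0, 100.0, 1000.0]:           # gradually scale lambda up
    x = minimize(neg_P_hat, x, args=(lam,), method="BFGS").x
```

With each increase of $\lambda$, the previous solution is reused as the warm start, so the iterates drift back onto the constraint surface while keeping the gains already made in $P$.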

Reducing the number of constraints and parameters by setting $t_4=T-t_1-t_2-t_3$ and changing $t_4>0$ to $t_1+t_2+t_3<T$ may also help.
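The same SLSQP sketch with $t_4$ eliminated (constants again invented): only three parameters remain, the equality constraint disappears, and $t_4>0$ becomes an inequality on the sum.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([3.0, 2.0, 5.0, 1.5])   # invented example constants
k = np.array([0.8, 1.2, 0.5, 2.0])
T = 10.0

def neg_P_reduced(u):
    # u = (t1, t2, t3); t4 is eliminated via the equality constraint.
    t = np.maximum(np.append(u, T - u.sum()), 1e-12)   # guard against 0/0
    return -np.sum(A * (1.0 - np.exp(-k * t)) / t)

res = minimize(
    neg_P_reduced,
    x0=np.full(3, T / 4),
    method="SLSQP",
    bounds=[(1e-9, T)] * 3,
    constraints=[{"type": "ineq", "fun": lambda u: T - u.sum() - 1e-6}],  # t4 > 0
)
t_full = np.append(res.x, T - res.x.sum())       # recover all four times
```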