I would like to present two optimization problems that are (so far as I can tell) non-convex. My goal is to implement a numerical solution in Python, and I'm happy to do some maths if it makes the problems easier to solve numerically.
Problem 1
Let $v_{1},\dots,v_{n}, v_{\text{ref}} \in \mathbf{R}_{\geq 0}^{d}$, and let $r_{i}$ be non-negative reals for $1\le i \le n$. We are looking for a stochastic matrix $Q$ that maximises the following quantity: $$ \max_{Q} \left( \sum_{j=1}^{n} \min_{c} \left( \frac{\sum_{i=1}^{n} q_{j,i}v_{i} }{v_{\text{ref}}} \right) - \sum_{j=1}^{n} \sum_{i=1}^{n} q_{j,i} r_{i} \right) $$ where $\min_{c}(v)$ denotes the minimal coordinate of the vector $v$, and division by $v_{\text{ref}}$ is coordinatewise.
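For concreteness, here is a minimal NumPy sketch of how I evaluate this objective for a given row-stochastic $Q$ (the function name and array layout are my own conventions, not part of the problem statement):

```python
import numpy as np

def objective_p1(Q, V, v_ref, r):
    """Evaluate the Problem 1 objective for a row-stochastic matrix Q.

    Q     : (n, n) array, each row sums to 1 (a stochastic matrix)
    V     : (n, d) array whose row i is v_i
    v_ref : (d,) array, assumed strictly positive so the division is defined
    r     : (n,) array of non-negative reals
    """
    mix = Q @ V              # row j is sum_i q_{j,i} v_i, shape (n, d)
    ratios = mix / v_ref     # coordinatewise division by v_ref
    gain = ratios.min(axis=1).sum()  # sum over j of the minimal coordinate
    cost = (Q @ r).sum()             # sum_j sum_i q_{j,i} r_i
    return gain - cost
```

For example, with $Q = I$, $v_1 = (1,2)$, $v_2 = (3,4)$, $v_{\text{ref}} = (1,1)$ and $r = (0.5, 0.5)$, this returns $\min(1,2) + \min(3,4) - 1 = 3$.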
Problem 2
Let $v_{1},\dots,v_{n}$ and $v_{\text{ref}}$ be non-negative vectors in $\mathbf{R}^{d}$, and let $r_{i}$ be non-negative reals for $1\le i \le n$. We are looking for a vector $q$ that maximises the following quantity:
$$ \max_{q_{1},\dots,q_{n}} \left( \min_{c} \left( \frac{ \sum_{i=1}^{n} q_{i}v_{i} }{v_{\text{ref}}} \right) - \sum_{i=1}^{n} q_{i} r_{i} \right) $$
subject to the constraints $0\le q_{i} \le 1$ for $1\le i\le n$ and $\sum_{i=1}^{n} q_{i} = 1$.
Thoughts
As far as I can tell, the issue is the $\min_{c}$ term, which makes the objective function non-convex. I don't know whether it is possible to turn this back into a convex optimization problem.
Searching online, I have found several techniques: heuristic optimization (genetic algorithms, simulated annealing, particle swarm optimization), converting the problem into a parameter-search problem by introducing a new parameter $t$ that lower-bounds the objective, etc. However, I have never used any of them before, so I decided to create this post in case someone has already handled such an optimization problem and could steer me in the right direction.
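To make the second technique concrete, here is my attempt at applying the lower-bound variable $t$ to Problem 2: maximise $t - \sum_i q_i r_i$ subject to $t \le \left(\sum_i q_i v_i\right)_k / (v_{\text{ref}})_k$ for every coordinate $k$, which is a linear program. This is a sketch under my own assumptions (SciPy's `linprog`, $v_{\text{ref}}$ strictly positive, my own array layout), so please correct me if I've set it up wrong:

```python
import numpy as np
from scipy.optimize import linprog

def solve_problem2(V, v_ref, r):
    """Attempt at Problem 2 as an LP via a lower-bound variable t.

    V     : (n, d) array whose row i is v_i
    v_ref : (d,) array, assumed strictly positive so the division is defined
    r     : (n,) array of non-negative reals
    Returns (q, value) with q the candidate maximiser.
    """
    n, d = V.shape
    W = V / v_ref                           # w_{i,k} = v_i[k] / v_ref[k]
    # Decision variables x = (q_1, ..., q_n, t); linprog minimises,
    # so we minimise r.q - t, i.e. maximise t - r.q.
    c = np.concatenate([r, [-1.0]])
    # Coordinatewise constraints  t <= (W^T q)_k  written as  -W^T q + t <= 0.
    A_ub = np.hstack([-W.T, np.ones((d, 1))])
    b_ub = np.zeros(d)
    A_eq = np.concatenate([np.ones(n), [0.0]])[None, :]  # sum_i q_i = 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)]           # t is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], -res.fun
```

As a sanity check, with $v_1 = (1,0)$, $v_2 = (0,1)$, $v_{\text{ref}} = (1,1)$ and $r = 0$, the objective is $\min(q_1, q_2)$, and the LP returns $q = (0.5, 0.5)$ with value $0.5$ as expected. If this reformulation is valid, I imagine Problem 1 could be handled similarly (perhaps row by row, since each row of $Q$ appears in its own terms), but I'd welcome confirmation.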
Thank you for your help!