Problem description:
Given:
- a set of $m$ arbitrary real-valued $n$-dimensional vectors $\vec{a}_j$; $m$ can be either larger or smaller than $n$, so the matrix $A$ composed of the vectors $\vec{a}_j$ does not have to be square.
- a cost function $E(\vec{x}) = \sum_i c_i x_i$, where $\vec{c}$ is again some arbitrary $n$-dimensional vector. Basically $E = (\vec{c}|\vec{x})$, where $(\cdot|\cdot)$ denotes the dot product.
- We want to find an $m$-dimensional real-valued vector of coefficients $\vec{f}$ such that $\vec{x}_{MAX} = \vec{b} + \sum_j f_j \vec{a}_j $.
- $\vec{b}$ is some $n$-dimensional vector of positive real values.
Basically, this is simply maximization of the expression:
$E = ( \vec{c} \mid \vec{b} + A\vec{f} )$
What is tricky are the constraints:
- $\forall i : x_i > 0.0 $ ( all components of $\vec{x}_{MAX}$ have to be positive )
- $\forall j : 0.0 < f_j < 1.0 \ $ ( all components of $\vec{f}$ are between 0.0 and 1.0 )
I know that this is probably some basic problem of linear programming, but since I don't know much about linear programming, I don't know what to search for exactly. What I have found so far were various kinds of systems of linear inequalities.
Real-world motivation:
It is an optimization of a simplistic model of production, where $n$ commodities can be transformed into each other by $m$ processes. The transformations are described by the vectors $\vec{a}_j$, where $a_{ij}$ is the change in the amount of the $i$-th commodity caused by the $j$-th process ( it can be both negative and positive, depending on whether the commodity is consumed or produced by the process ).
Each commodity has cost $c_i$. We want to maximize the cost of the products after the transformation by choosing the degree ($0.0 < f_j < 1.0$) to which we run the $j$-th transformation.
In summary, the problem reduces to maximizing, for the fixed vector $r := A^Tc \in \mathbb{R}^m$, the linear function $f \mapsto r^Tf$ over the interior of the hypercube $[0,1]^m$, subject to the positivity constraints $\vec{b} + A\vec{f} > 0$.
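The reduction follows by expanding the objective; the first term is a constant, so it does not affect the maximizer:

$$ E(\vec{f}) = (\vec{c} \mid \vec{b} + A\vec{f}) = \vec{c}^{\,T}\vec{b} + \vec{c}^{\,T}A\vec{f} = \vec{c}^{\,T}\vec{b} + (A^T\vec{c})^T\vec{f} = \vec{c}^{\,T}\vec{b} + r^T\vec{f}. $$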
Strict inequalities are not quite good in the business of LPs (for example, existence of solutions becomes tricky). If your practical application allows you the luxury of relaxing the constraints on $f$ to something like $\delta_j \le f_j \le 1 - \gamma_j$ $\forall j$ (where the $\delta_j$'s and $\gamma_j$'s are tiny nonnegative constants), then your problem becomes a standard LP that can be attacked by standard solution techniques. Practically, you could use scipy's optimize.linprog solver.
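A minimal sketch of the relaxed LP in SciPy, on made-up random data ($A$, $b$, $c$ and the sizes $n$, $m$ below are placeholders, not from the original problem). `linprog` minimizes, so the objective $r^Tf$ is negated, and the strict constraint $\vec{b} + A\vec{f} > 0$ is relaxed to $-A\vec{f} \le \vec{b} - \varepsilon$ for a small $\varepsilon > 0$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 4, 6                          # commodities, processes (toy sizes)
A = rng.normal(size=(n, m))          # a_ij: change of commodity i by process j
b = rng.uniform(1.0, 2.0, size=n)    # positive starting amounts
c = rng.uniform(0.5, 1.5, size=n)    # commodity costs

r = A.T @ c                          # maximize r^T f  (the c^T b term is constant)
delta, gamma, eps = 1e-9, 1e-9, 1e-6

res = linprog(
    c=-r,                            # linprog minimizes, so negate the objective
    A_ub=-A,                         # b + A f >= eps  rewritten as  -A f <= b - eps
    b_ub=b - eps,
    bounds=[(delta, 1.0 - gamma)] * m,
    method="highs",
)

f = res.x
x = b + A @ f                        # resulting commodity amounts
print("optimal f:", f)
print("objective E =", c @ x)
```

Note that $f \approx 0$ is always feasible here (it leaves $x \approx b > 0$), so the relaxed LP is never infeasible for positive $b$.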