I already have an algorithm that solves this problem but I'm looking for ways to make it faster.
I'm working on a meal planner. I have a collection of foods; each food is represented as a vector of its nutrient components (e.g. a hamburger might be [500 calories, 30 g protein, 30 g fat, ...]). Say the foods are arranged as the columns of a matrix $F$ with $0 ≤ F$, so that if $x$ is the vector of food quantities, $Fx$ is the combined nutrient vector. The number of foods and the number of nutrients differ, but both are generally in the 20-30 range.
I then also have a minimum nutrient vector $m$ and maximum vector $M$ where $0 ≤ m ≤ M$. The algorithm finds coefficients $0 ≤ x$ to multiply each food by such that
$m ≤ Fx ≤ M$
My algorithm currently starts with $x$ as the zero vector, then fills in the coefficients one by one with random values. If at any point $m ≤ Fx ≤ M$, we have a solution; if any component of $Fx$ goes past $M$, or we run out of coefficients to fill, we abort and start the next attempt. I wrote a version in C that usually takes under 10 seconds and about 4 to 10 million iterations to find a solution.
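For reference, here's a rough Python sketch of that procedure (the C version is the same idea, just faster; `x_max` stands in for a per-food cap and is a made-up parameter):

```python
import numpy as np

def random_fill(F, m, M, x_max=5.0, max_tries=1_000_000, seed=0):
    """Randomly fill in coefficients one by one; abort and retry on overshoot."""
    rng = np.random.default_rng(seed)
    n_nutrients, n_foods = F.shape
    for _ in range(max_tries):
        x = np.zeros(n_foods)
        total = np.zeros(n_nutrients)          # running value of F x
        for j in range(n_foods):
            x[j] = rng.uniform(0.0, x_max)
            total += F[:, j] * x[j]
            if np.any(total > M):              # overshot a maximum: abort this attempt
                break
            if np.all(total >= m):             # m <= F x <= M now holds: done
                return x
    return None                                # no solution found within max_tries
```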
Ideally I'd have a way to find the boundaries of the solution space $X = \{x \in \mathbb{R}^{\text{no. foods}} : 0 \le x \wedge m \le Fx \le M\}$. Presumably $X$ is a convex polytope, so if I could find the convex hull or a small cuboid entirely containing it, it would be quick to sample from.
Is linear programming enough for this problem? I've been looking into conical combinations and convex optimization but haven't found much that helps. It's not too hard to write a program that quickly finds a solution; the harder part is sampling uniformly from $X$.
I also tried some of scipy's optimization methods, like scipy.optimize.lsq_linear and scipy.optimize.nnls, which seemed perfect, but unfortunately they generate solutions outside my constraints.
Also if it matters, $x$ will typically have an upper bound like $0 \le x \le b$.
If I understand correctly, you have a polytope
$$\mathcal P := \{ \mathrm x \in \mathbb R^d \mid \mathrm A \mathrm x \leq \mathrm b \}$$
defined by the intersection of half-spaces — which is usually called an $\mathcal H$-polytope — and you want to sample uniformly over this polytope. This is only possible if the polytope is bounded, of course. Hence, checking boundedness would be the very first step.
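For the constraints in the question, $\mathrm A$ and $\mathrm b$ are obtained by stacking the four blocks $Fx \leq M$, $-Fx \leq -m$, $x \leq b$, $-x \leq 0$. A sketch with scipy's `linprog` (the numbers below are made up; a zero objective turns the LP into a pure feasibility check, which also certifies the polytope is non-empty):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_nutrients, n_foods = 25, 25
F = rng.uniform(0, 10, size=(n_nutrients, n_foods))  # hypothetical food matrix
m = np.full(n_nutrients, 20.0)    # minimum nutrient vector
M = np.full(n_nutrients, 200.0)   # maximum nutrient vector
ub = np.full(n_foods, 5.0)        # per-food upper bound on x

# Stack everything into a single system A x <= b.
A = np.vstack([F, -F, np.eye(n_foods), -np.eye(n_foods)])
b = np.concatenate([M, -m, ub, np.zeros(n_foods)])

res = linprog(c=np.zeros(n_foods), A_ub=A, b_ub=b,
              bounds=[(None, None)] * n_foods, method="highs")
# res.success => some feasible meal plan res.x exists
```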
Naive algorithm
Solve $2d$ linear programs of the form
$$\begin{array}{ll} \text{minimize} & \pm \, \mathrm e_i^T \mathrm x\\ \text{subject to} & \mathrm A \mathrm x \leq \mathrm b\end{array}$$
where the $\mathrm e_i$'s are the elements of the standard basis, in order to find a tight "box" that contains $\mathcal P$. It's easy to sample over this box. And it's not that expensive to check whether a sample that is inside the box is also inside the polytope. If it is not, discard it. If the polytope is a lot "smaller" than the box, this is horribly inefficient, as most candidate samples will be discarded.
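A sketch of this box-plus-rejection scheme using scipy's `linprog` (everything here is illustrative; the $2d$ LPs also detect unboundedness, since an unbounded coordinate makes the LP fail):

```python
import numpy as np
from scipy.optimize import linprog

def bounding_box(A, b):
    """Min/max each coordinate over {x : A x <= b} via 2d linear programs."""
    d = A.shape[1]
    lo, hi = np.empty(d), np.empty(d)
    for i in range(d):
        c = np.zeros(d)
        c[i] = 1.0
        for sign, out in ((1.0, lo), (-1.0, hi)):
            res = linprog(sign * c, A_ub=A, b_ub=b,
                          bounds=[(None, None)] * d, method="highs")
            if not res.success:            # unbounded or empty polytope
                raise ValueError("polytope is unbounded or empty")
            out[i] = sign * res.fun
    return lo, hi

def rejection_sample(A, b, n, seed=0):
    """Sample uniformly over the bounding box; keep points inside the polytope."""
    rng = np.random.default_rng(seed)
    lo, hi = bounding_box(A, b)
    samples = []
    while len(samples) < n:
        x = rng.uniform(lo, hi)
        if np.all(A @ x <= b + 1e-12):     # cheap membership test
            samples.append(x)
    return np.array(samples)
```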
Less naive algorithm
You have an $\mathcal H$-polytope. It would be nice to have a $\mathcal V$-polytope, i.e., one defined by the convex hull of its vertices. This is the vertex enumeration problem. Take a look at Kaibel & Pfetsch's Some Algorithmic Problems in Polytope Theory [pdf]. If you can find the vertices of the polytope, then you can sample the polytope by generating random convex combinations of the vertices. Hence, instead of sampling the polytope, you just sample the standard $(d-1)$-simplex.
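Once the vertices are known, the convex-combination step looks like this. One caveat worth stating: Dirichlet$(1,\dots,1)$ weights are uniform on the simplex, but the induced distribution on the polytope is uniform only when the polytope is itself a simplex; for a general polytope one would triangulate it and pick each simplex with probability proportional to its volume.

```python
import numpy as np

def sample_from_vertices(V, n, seed=0):
    """Random convex combinations of the rows of V (the polytope's vertices)."""
    rng = np.random.default_rng(seed)
    k = V.shape[0]
    # Dirichlet(1, ..., 1) is the uniform distribution on the standard (k-1)-simplex.
    weights = rng.dirichlet(np.ones(k), size=n)   # shape (n, k), rows sum to 1
    return weights @ V
```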
Smart algorithms
Take a look at Lovász & Vempala's Hit-and-Run from a Corner [pdf] and the references therein.
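A minimal hit-and-run sketch for a bounded $\mathcal H$-polytope, assuming a strictly interior starting point is available (e.g. from a Chebyshev-center LP); after a burn-in the iterates approach the uniform distribution:

```python
import numpy as np

def hit_and_run(A, b, x0, n_steps, seed=0):
    """Run n_steps of hit-and-run on {x : A x <= b}, starting from interior x0."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)              # uniform random direction
        # The line x + t d stays inside while t * (A d)_i <= (b - A x)_i for all i.
        Ad = A @ d
        slack = b - A @ x
        t_hi = np.min(slack[Ad > 1e-12] / Ad[Ad > 1e-12], initial=np.inf)
        t_lo = np.max(slack[Ad < -1e-12] / Ad[Ad < -1e-12], initial=-np.inf)
        x = x + rng.uniform(t_lo, t_hi) * d  # uniform point on the chord
    return x
```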