Summary
For simulation problems, I need to generate large numbers of random lists of numbers, say $x_1, x_2, \dots, x_n$ (where $n \approx 1000$), subject to constraints similar to those one would encounter in a linear programming problem. Ideally, I'd like to get solutions distributed uniformly over the solution space, but for now I just want something working. I need to generate them quickly, so the solution I found was to generate a random set of weights and then use linear programming to find the set of $x$'s that maximizes the weighted sum. However, when I used a program to output the solutions, all the results were clustered around one main result. Why is this? Also, would there be a better way to sample?
A more concrete set up is this:
$$ \text{Need to generate points: } x_1, x_2, ... x_{1000} \sim U[(??)]$$
Where the $x$'s satisfy:
$$ 0 \leq x_n \leq C,$$
where $C = .005$, for all $n$, and:
$$ \mathbf{E}\mathbf{x} = \mathbf{F},$$
where $\mathbf{E}$ is a matrix with $k \lt n$ rows whose entries satisfy $e_{mn} \sim N(0,1)$ (not truly random — they come from real processes — but essentially so), and $\mathbf{F}$ is fixed by an original feasible solution to the constraints, one with 200 of the $x_n$ equal to $.005$. I am trying to match this original solution. So I solve the following problem:
Let $\mathbf{R}$ be a vector with i.i.d. $N(0,1)$ entries. Maximize $\mathbf{R} \cdot \mathbf{x}$ subject to the preceding constraints.
However, I get "random" solutions that cluster extremely close to the original solution, at an average Euclidean distance of about $10^{-14}$. Any idea why this would be?
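For reference, here is a minimal sketch of the procedure described above, using `scipy.optimize.linprog` and toy sizes in place of $n = 1000$ (the sizes, seed, and the count of pinned coordinates are illustrative assumptions, not the real problem data):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy stand-ins for n = 1000 and k < n rows; cap plays the role of .005.
n, k, cap = 60, 10, 0.005

# Build a feasible target: a handful of x's pinned at the upper bound,
# mimicking the original solution with 200 coordinates equal to .005.
x0 = np.zeros(n)
x0[rng.choice(n, size=12, replace=False)] = cap

E = rng.standard_normal((k, n))   # stand-in for the "essentially N(0,1)" matrix
F = E @ x0                        # F is feasible by construction

def sample_once():
    """Draw random weights R and maximize R.x (linprog minimizes, so negate)
    over the polytope {E x = F, 0 <= x <= cap}."""
    R = rng.standard_normal(n)
    res = linprog(-R, A_eq=E, b_eq=F, bounds=[(0, cap)] * n, method="highs")
    assert res.success
    return res.x

draws = np.array([sample_once() for _ in range(20)])

# Pairwise Euclidean distances between the "random" LP solutions:
d = np.linalg.norm(draws[:, None, :] - draws[None, :, :], axis=-1)
print("max pairwise distance between draws:", d.max())
```

Note that each call returns a vertex of the polytope, so this sampler can only ever visit vertices, never the interior of the feasible region.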
My thoughts
I've been researching the theory behind linear programming, and I've realized that if I use my original solution as my "basic feasible solution" after introducing slack variables and such, I get a constraint of the form (using the format of this answer):
$$\mathbf{A}x = \begin{pmatrix} \mathbf{0} & \mathbf{N_1} \\ \mathbf{I} & \mathbf{N_2} \end{pmatrix}\begin{pmatrix} x_B \\ x_N\end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ \mathbf{.005} \end{pmatrix}$$
where $\mathbf{I}$ is the $n \times n$ identity matrix, $\mathbf{0}$ is $k \times n$, etc. From this we arrive at the following two equations:
$$ \mathbf{N_1}x_N = 0 $$
$$ \text{and} $$
$$ x_B + \mathbf{N_2}x_N = (.005)$$
The bottom is a standard constraint for linear programming, but the top is extra. Is this what is squeezing my solutions down?
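One way to see the squeezing numerically is to count how many coordinates the solver leaves strictly between the bounds: at a basic feasible solution of $\{\mathbf{E}\mathbf{x} = \mathbf{F},\; 0 \le x_n \le .005\}$, at least $n - k$ of the bound constraints must be active, so at most $k$ of the $n$ components can sit off a bound — with $k \ll n$, every maximizer is pinned almost everywhere. A toy check (sizes and seed are illustrative; `highs-ds` is used so the solver returns a vertex):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, cap = 60, 10, 0.005          # toy stand-ins for n = 1000, k < n

# Original feasible point with a handful of coordinates pinned at the cap.
x0 = np.zeros(n)
x0[rng.choice(n, size=12, replace=False)] = cap
E = rng.standard_normal((k, n))
F = E @ x0

# Dual simplex returns a basic feasible solution, i.e. a vertex.
R = rng.standard_normal(n)
res = linprog(-R, A_eq=E, b_eq=F, bounds=[(0, cap)] * n, method="highs-ds")

# At a vertex, at most k coordinates can be strictly between 0 and cap.
interior = int(np.sum((res.x > 1e-9) & (res.x < cap - 1e-9)))
print(interior, "coordinates strictly between bounds; upper bound is k =", k)
```

So any two random-objective maximizers can differ only on a small set of "free" coordinates, which is consistent with the tiny distances observed.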