Using Gurobi, I am trying to solve the following LP
$$\begin{aligned} \text{minimize} \quad & \sum_{i=1}^d r_i \\ \text{subject to} \quad & x^TV - r = 0 \\ & -1 \le x_j \le 1 \quad \text{for all } 1 \le j \le n \end{aligned}$$
Here, $V$ is a set of $n$ vectors with $V_i \in \mathbb{R}^d$ for all $1 \le i \le n$, $x$ is the vector of decision variables, and $r \in \mathbb{R}^d$ is a slack variable. In the solver, the bounds on the free variables $x_j$ can be specified directly; normally, I would first split each $x_j$ into $x_j^{+}, x_j^{-}$ and then substitute $x_j^{+} - x_j^{-}$ for $x_j$ to bring the LP into standard form.
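For concreteness, here is a sketch of the same LP using SciPy's `linprog` in place of my Gurobi code (the variable names are mine; I leave $r$ at a lower bound of $0$, which I believe matches Gurobi's default variable bound):

```python
import numpy as np
from scipy.optimize import linprog

# Rows of V are the vectors V_1..V_n, each of dimension d.
V = np.array([[1.0, 1.0],
              [1.0, 1.0]])
n, d = V.shape

# Decision vector z = [x_1..x_n, r_1..r_d]; the objective is sum of r_i.
c = np.concatenate([np.zeros(n), np.ones(d)])

# Constraint x^T V - r = 0, written as V^T x - I r = 0 (one row per dimension).
A_eq = np.hstack([V.T, -np.eye(d)])
b_eq = np.zeros(d)

# -1 <= x_j <= 1; r keeps the default lower bound of 0 (as in Gurobi).
bounds = [(-1, 1)] * n + [(0, None)] * d

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.fun)  # objective is 0 for this input
```
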
While the above LP gives the expected solution for the trivial input $$V = \{(1,1), (1,1)\}$$ (sorry for the notation; I'm not sure how to write two vectors $V_1, V_2$ in MathJax),
namely $$x_1 = -1, \quad x_2 = 1, \quad r = (0,0),$$
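This solution is easy to verify numerically:

```python
import numpy as np

V = np.array([[1.0, 1.0],
              [1.0, 1.0]])   # V_1 = (1,1), V_2 = (1,1)
x = np.array([-1.0, 1.0])

r = x @ V                    # x_1*V_1 + x_2*V_2
print(r)                     # [0. 0.] -- the constraint x^T V - r = 0 holds with r = (0,0)
```
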
I wonder whether it is at all possible to constrain the problem such that the solver will still look for a nontrivial $x$, with a (possibly large) $\sum_{i=1}^d r_i$, when the number of vectors $n$ and the dimensionality $d$ of the vectors are large.
In the above formulation, a valid solution is of course $x_j = 0$ for all $1 \le j \le n$ whenever no solution with nonzero $x_j$ exists, since there is no constraint preventing this. Moreover, another valid solution is $x_j = 1$ with $x_1,..,x_{j-1} = x_{j+1},..,x_n = 0$ and $r = V_j$. Can these be prevented? It does not seem possible to add a constraint on $\sum_{j=1}^n x_j$, for example, given that the $x_j$ are free variables.
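Both degenerate solutions are easy to confirm as feasible, e.g. for the trivial input above:

```python
import numpy as np

V = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# All-zero weights: the constraint holds with r = 0, giving objective 0.
x_zero = np.zeros(2)
print(x_zero @ V)            # [0. 0.]

# Single unit weight: x_1 = 1 and the rest 0, so the constraint holds with r = V_1.
x_unit = np.array([1.0, 0.0])
print(x_unit @ V)            # [1. 1.], i.e. r = V_1
```
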
Can it be shown that this problem cannot be formulated as an LP? Or am I missing some constraint that prevents the solver from coming up with the aforementioned undesired solutions?
Edit for clarification: essentially, what I'm trying to do is find the weights (i.e. the decision variables $x_j$ with $1 \le j \le n$) that, multiplied by the vectors $V_j$, satisfy $x_1V_1 + .. + x_nV_n = 0$ (this is part of the constraint I wrote above). For every $x_j$, it holds that $-1 \le x_j \le 1$. Since the equal-length vectors $V_j \in V$ have dimension $d$, this translates into $d$ constraints of this form, one for each dimension.
Now, since satisfying this constraint is not (always) possible, I add an "error"/residual term $r$, which is also a vector of length $d$. The objective is then to minimize the sum of $r$'s entries, i.e. to minimize the error vector. Another example input would be $V = \{(1,1,1), (2,1,1)\}$; here it could give $x_1 = -1, x_2 = 1$ with $r = (1, 0, 0)$, for example.
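Checking that second example numerically, with $x_1 = -1$ and $x_2 = 1$:

```python
import numpy as np

V = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 1.0]])  # V_1 = (1,1,1), V_2 = (2,1,1)
x = np.array([-1.0, 1.0])

r = x @ V                        # -V_1 + V_2
print(r)                         # [1. 0. 0.], so the objective sum(r) is 1
```
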
However, while this seems to work just fine for the trivial input I gave, for larger $n$ and $d$ (for example, a set of vectors taken from a real dataset) it no longer works. There, the model will either return objective 0.0 with $x_j = 0$ for all $1 \le j \le n$, or it will just set $x_j = 1$ for one of the vectors $V_j$ and give $r = V_j$. This is expected, since no constraint prevents these from being valid solutions.
Thus, I wonder whether it is possible to add a constraint that will give me the desired solution (i.e. a set of weights $x_j$ and a small "error" vector $r$), or whether this cannot be modeled as an LP at all, and if so, if/how this can be shown.
For such a constraint, I thought of forcing the sum of the weights/decision variables $x_j$ to be nonzero, but this doesn't really work: a strict condition like $\sum_{j=1}^n x_j \neq 0$ cannot be expressed with linear constraints.