The problem is the following:
Given $v_1, \, v_2, \, \ldots, \, v_n \in \mathbb R^m$; find $\epsilon_1, \, \epsilon_2, \, \ldots, \, \epsilon_n \in \{0,1\}$ such that $$\left\vert \sum_{i=1}^n (-1)^{\epsilon_i} v_i \right\vert = \max$$ where $\vert \cdot\vert $ denotes the Euclidean norm.
That is, we want to maximize the length of a signed sum of the vectors, under the restriction that for every vector we must use either the vector itself or $(-1)$ times it.
This came from an application: some time ago I was asked whether I knew how to obtain a solution. I didn't see an efficient algorithm, but since the number of vectors was typically small, the problem could be solved by simply going through all $2^n$ sign choices.
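For concreteness, here is a minimal sketch (my own, not from the application) of that brute-force search; the function name `max_signed_sum` is just an illustrative choice:

```python
import itertools
import math

def max_signed_sum(vectors):
    """Brute-force search over all sign patterns.

    Returns (best_length, best_signs). Exponential in n, so only
    practical when the number of vectors is small.
    """
    n = len(vectors)
    m = len(vectors[0])
    best_len, best_signs = -1.0, None
    # Negating every sign leaves the length unchanged, so the sign of
    # the first vector can be fixed, halving the work to 2^(n-1).
    for tail in itertools.product((1, -1), repeat=n - 1):
        signs = (1,) + tail
        s = [sum(sg * v[j] for sg, v in zip(signs, vectors))
             for j in range(m)]
        length = math.hypot(*s)
        if length > best_len:
            best_len, best_signs = length, signs
    return best_len, best_signs
```

On the three-vector example below, this returns the optimum $\sqrt{73}$, achieved by $\binom{3}{2} + \binom{-2}{3} + \binom{2}{3} = \binom{3}{8}$.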
I thought about it again today, and the problem really looks like it should admit a nice efficient algorithm - at least something better than $\Theta(2^n)$.
Still unable to find one, I wanted to ask whether someone here can figure out a neat solution!
Something that doesn't work:
- Order the vectors by length: $|v_1|\ge |v_2|\ge \dots \ge |v_n|$
- Having found a maximal vector-combination $v = \sum \pm v_i$ from $v_1, \, v_2, \, \ldots, \, v_k$, choose the sign of $v_{k+1}$ so that it maximizes the length of $v \pm v_{k+1}$.
A counterexample is given by $\begin{pmatrix} 3\\ 2\end{pmatrix},\, \begin{pmatrix} -2\\ 3\end{pmatrix},\, \begin{pmatrix} 2\\ 3\end{pmatrix}$. Combining the first two vectors we get $$\begin{pmatrix} 1\\ 5\end{pmatrix} \quad \text{or}\quad \begin{pmatrix} 5\\ -1\end{pmatrix}$$ both of which have the same length. However, if we choose the first combination we can get $\begin{pmatrix} 3\\ 8\end{pmatrix}$ whereas for the second combination $\begin{pmatrix} 7\\ 2\end{pmatrix}$ is the best we can do.
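A small script (a sketch of my own, not part of the original argument) makes the failure concrete: both two-vector partial sums have the same length, so the greedy rule cannot distinguish them, yet only one of them extends to the optimum:

```python
import math

v1, v2, v3 = (3, 2), (-2, 3), (2, 3)

def add(a, b, sign=1):
    """Componentwise a + sign*b."""
    return tuple(x + sign * y for x, y in zip(a, b))

def norm(a):
    """Euclidean length."""
    return math.hypot(*a)

# The two sign choices for v2 give partial sums of equal length ...
a = add(v1, v2, +1)   # (1, 5)
b = add(v1, v2, -1)   # (5, -1)
assert math.isclose(norm(a), norm(b))   # both are sqrt(26)

# ... but their best continuations with v3 differ:
best_from_a = max(norm(add(a, v3, +1)), norm(add(a, v3, -1)))  # sqrt(73)
best_from_b = max(norm(add(b, v3, +1)), norm(add(b, v3, -1)))  # sqrt(53)
```

So whichever tie-breaking rule the greedy procedure uses, an adversarial instance can make it pick the partial sum that leads to the worse final length.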
Feel free to share your thoughts! Thanks. =)
A nice piece of terminology (not directly related to the answer): letting $A$ be the $m \times n$ matrix whose columns are the vectors $v_1, \ldots, v_n$, your problem is concisely stated as computing the subordinate $(\infty, 2)$-norm of $A$.
This problem is NP-hard, which implies that there is no polynomial-time algorithm unless P=NP. I do not know what the best reference for it is, but one place where it is proved to be NP-hard is [GK89]: see the problem called ZonMax there, a slight variation of your problem that is easily seen to be equivalent to it. I think that part of the reduction in [GK89] can be simplified by using the NP-completeness of the subset sum problem, as is done in [MK87].
[GK89] Peter Gritzmann and Victor Klee. On the 0-1 maximization of positive definite quadratic forms. IMA Preprint Series #522, April 1989. http://www.ima.umn.edu/preprints/Jan89Dec89/522.pdf
[MK87] Katta G. Murty and Santosh N. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Mathematical Programming, 39(2):117–129, June 1987. DOI: 10.1007/BF02592948. (I checked only the technical report version.)