Assignment Problem that maximizes norm of sum of vectors


I have the following assignment problem

\begin{align} \text{Maximize: }\quad &\left|\left|\sum_{i=1}^{n}\sum_{j=1}^{n} \boldsymbol{v}_{ij} x_{ij}\right|\right|_2^2\\ \text{such that: }\quad &\sum_{j=1}^n x_{ij} = 1, \quad \forall i = 1, \ldots,n\\ \quad &\sum_{i=1}^n x_{ij} = 1, \quad \forall j = 1, \ldots,n\\ &x_{ij}\in \{0,1\} \end{align}

Here $\boldsymbol{v}_{ij}$ are vectors corresponding to the assignment of $i$ to $j$, with coordinates $(\boldsymbol a_i \cdot \boldsymbol b_j, ||\boldsymbol a_i \times \boldsymbol b_j||_2)$, where $\boldsymbol a_i$ and $\boldsymbol b_j$ are arbitrary vectors in $\mathbb R^2$ (so the cross product reduces to a scalar). How can we solve this problem efficiently? If it is NP-hard, how can we design an approximation algorithm?
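As a concrete sketch of how the $\boldsymbol v_{ij}$ can be built (NumPy; the names `A`, `B`, `V` and the random data are my own placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, 2))  # vectors a_i
B = rng.normal(size=(n, 2))  # vectors b_j

# v_ij = (a_i . b_j, ||a_i x b_j||_2); in R^2 the cross product is the
# scalar a_x*b_y - a_y*b_x, so its norm is just the absolute value.
dot = A @ B.T                                                    # (n, n) matrix of a_i . b_j
cross = np.abs(np.outer(A[:, 0], B[:, 1]) - np.outer(A[:, 1], B[:, 0]))
V = np.stack([dot, cross], axis=-1)                              # V[i, j] = v_ij in R^2
```

Note that the second coordinate of every $\boldsymbol v_{ij}$ is non-negative by construction, while the first may be negative.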

Notes:

  1. If the objective function were linear, the problem would be a standard Linear Sum Assignment Problem (LSAP), solvable in $\mathcal O(n^3)$ time.
  2. The objective function is quadratic in the variables $x_{ij}$, so this is a Quadratic Assignment Problem (QAP). The QAP is NP-hard in general. However, there might be structure in this particular problem that makes it solvable in polynomial time or that admits an efficient approximation algorithm.
  3. Maximizing the Magnitude of the Resultant Vector addresses the related problem of finding a subset of vectors that maximizes the norm of their sum. The solution there sorts the vectors by angle and proves that the optimal subset must be contiguous in that order. I wonder whether a similar analysis could help here.
  4. A $\sqrt{n}$-approximation algorithm for the general QAP is given in [1].
  5. [2] gives a general description of the QAP; however, it deals with the minimization problem.
  6. A different way of describing the objective function for the QAP is given in [3]: $$ \text{Maximize} \sum_{i, j \in [n], i\neq j} w_{ij} d_{\pi(i)\pi(j)} $$ Here $\pi: [n]\mapsto [n]$ is the permutation for the assignment, and $W=(w_{ij})$ and $D=(d_{ij})$ are symmetric non-negative matrices. I am not sure whether our problem can be converted to this form. They also state that an approximation ratio of almost $3.16$ can be obtained when the triangle inequality holds for either of the matrices; I am not sure whether that applies here, since I have not been able to convert the problem into the given form.

  7. An upper bound for the problem can be obtained by maximizing the surrogate objective $\sum_{i=1}^{n}\sum_{j=1}^n ||\boldsymbol v_{ij}||_2 \, x_{ij}$ and then squaring the optimal value: by the triangle inequality, $||\sum_{i,j} \boldsymbol v_{ij} x_{ij}||_2 \le \sum_{i,j} ||\boldsymbol v_{ij}||_2 \, x_{ij}$ for every feasible assignment. This bound can be computed in polynomial time, since the surrogate is a standard linear sum assignment problem. However, the surrogate's optimal solution does not seem to carry any guarantee for the original objective. Perhaps it can be improved by using note 3.
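The bound in note 7 can be sketched as follows (Python, assuming SciPy's `linear_sum_assignment` and placeholder data for the $\boldsymbol v_{ij}$):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n = 6
V = rng.normal(size=(n, n, 2))  # V[i, j] = v_ij (placeholder data)

# Surrogate costs ||v_ij||_2; maximize=True solves the max-weight LSAP.
norms = np.linalg.norm(V, axis=-1)
rows, cols = linear_sum_assignment(norms, maximize=True)
upper_bound = norms[rows, cols].sum() ** 2  # square of the optimal LSAP value

# True objective of that same assignment, for comparison; by the triangle
# inequality it can never exceed the upper bound.
resultant = V[rows, cols].sum(axis=0)
objective = resultant @ resultant
```

The gap between `objective` and `upper_bound` gives a quick (instance-specific) sanity check on how loose the relaxation is.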

There is 1 best solution below.

The objective is the squared norm of a two-dimensional vector. My approach would be to view the problem as a multiobjective problem with two objectives: the individual components of that vector. So, the first objective is $\sum_{i=1}^{n}\sum_{j=1}^{n} (\boldsymbol{v}_{ij})_1 x_{ij}$ and the second objective is $\sum_{i=1}^{n}\sum_{j=1}^{n} (\boldsymbol{v}_{ij})_2 x_{ij}$.

If you maximize a weighted sum of the two objectives, you can recover the Pareto frontier. Let $\boldsymbol{w} \in \mathbb{R}^2$ be a weight vector (just two numbers that sum to 1). The following optimization problem generates a point on the Pareto frontier (unless one of the weights is $0$): \begin{align} \text{Maximize: }\quad &\sum_{i=1}^{n}\sum_{j=1}^{n} \boldsymbol{w}^T \boldsymbol{v}_{ij} x_{ij}\\ \text{such that: }\quad &\sum_{j=1}^n x_{ij} = 1, \quad \forall i = 1, \ldots,n\\ \quad &\sum_{i=1}^n x_{ij} = 1, \quad \forall j = 1, \ldots,n\\ &x_{ij}\in \{0,1\} \end{align}

Solving this linear assignment problem for one $\boldsymbol{w}$ gives a Pareto-optimal solution with objective vector $\sum_{i=1}^{n}\sum_{j=1}^{n} \boldsymbol{v}_{ij} x_{ij} \in \mathbb{R}^2$. By repeatedly solving it for varying $\boldsymbol{w}$ and plotting the objective vectors in the plane, you trace out the Pareto frontier (more precisely, its supported points, which lie on the convex hull of the achievable objective vectors). If you enter "weighted sum method" and "pareto" in your favorite search engine, you will find a few background reads. I picked the weighted sum method for this problem because the subproblems are linear assignment problems.
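One weighted-sum subproblem might look like this (a sketch using SciPy's `linear_sum_assignment`; the function name and the placeholder data are mine):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pareto_point(V, w):
    """Solve the LSAP with scores w . v_ij; return the 2-D objective vector."""
    scores = V @ w                       # (n, n) matrix of w^T v_ij
    rows, cols = linear_sum_assignment(scores, maximize=True)
    return V[rows, cols].sum(axis=0)     # sum of the chosen v_ij, a point in R^2

rng = np.random.default_rng(2)
V = rng.normal(size=(8, 8, 2))           # placeholder v_ij data
p = pareto_point(V, np.array([0.5, 0.5]))
```

Each call is one $\mathcal O(n^3)$ LSAP solve, so scanning many weight vectors stays cheap.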

Once you have generated sufficiently many points, you pick the one with the largest norm. Since you know in advance that you will do this, you can use that to vary $\boldsymbol{w}$ in an intelligent way, focusing on the part of the Pareto frontier where the optimal solution must lie.
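A simple way to vary $\boldsymbol{w}$ is to sweep weight directions around the full circle (since the first coordinate of $\boldsymbol v_{ij}$ may be negative) and keep the assignment whose resultant has the largest norm. A sketch under the same assumptions (SciPy's `linear_sum_assignment`, placeholder data):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_weighted(V, w):
    """Max-weight LSAP with scores w . v_ij; return the assignment and resultant."""
    rows, cols = linear_sum_assignment(V @ w, maximize=True)
    return rows, cols, V[rows, cols].sum(axis=0)

def best_by_norm(V, num_angles=64):
    """Sweep weight directions w = (cos t, sin t) and keep the largest squared norm."""
    best = None
    for t in np.linspace(0.0, 2 * np.pi, num_angles, endpoint=False):
        w = np.array([np.cos(t), np.sin(t)])
        rows, cols, resultant = solve_weighted(V, w)
        val = resultant @ resultant
        if best is None or val > best[0]:
            best = (val, rows, cols)
    return best

rng = np.random.default_rng(3)
V = rng.normal(size=(6, 6, 2))  # placeholder v_ij data
val, rows, cols = best_by_norm(V)
```

A refinement would be to bisect on the angle around the current best direction instead of using a uniform grid, which concentrates solves on the relevant arc of the frontier.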