Solving inequality-constrained least-norm problem non-iteratively


I am looking into the following constrained quadratic program in $x \in \mathbb{R}^4$:

$$\begin{array}{ll} \text{minimize} & \| x \|_2^2\\ \text{subject to} & A x = b\\ & x_{\min} \le x_i \le x_{\max}, \quad\forall i \in \{1,2,3,4\} \end{array}$$

where $A \in \mathbb{R}^{3 \times 4}$ and $b \in \mathbb{R}^3$. I would like to solve it without using an iterative quadratic solver.

For now, I am thinking about the following "algorithm":

  1. Use the Moore–Penrose pseudoinverse to find the optimal $x$, ignoring the bounds.

  2. If any of the bounds are violated, fix one $x_i$ at the violated boundary value at a time, and solve for the remaining coordinates of $x$ with the ordinary inverse of the reduced $3 \times 3$ system. Repeating this for each $x_i$ yields 4 different candidate vectors (if only one side of the constraint is violated). I finally choose the feasible candidate with the lowest norm.

Now I wonder: is this a sound strategy? If so, does anyone know whether the strategy has a name I can search for? If anyone has other suggestions for solving the problem without an iterative quadratic solver, I would appreciate the feedback.

This seems like something that should have been asked before on this site, but I could not find it searching. Sorry if duplicate.

Best answer:

Assuming the problem has a solution, your strategy is pretty much there, with a couple of small tweaks.

  • You might have to apply step 2 recursively, as you have outlined: the optimal solution may occur with two or three $x_i$ fixed at a boundary instead of just one. Worst case, the number of candidate active sets then grows exponentially in the dimension of $x$ rather than linearly. That isn't bad in $4$ dimensions like you have.
  • Since you have $4$ coordinates with $2$ boundaries each, the non-recursive version of step 2 already produces $4 \cdot 2 = 8$ candidate vectors (you don't know in advance whether it's the upper or the lower bound that binds), and the fully recursive version has up to $3^4 = 81$ free/lower/upper patterns to consider.
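For a problem this small, the recursive version of step 2 can simply be brute-forced: try every assignment of coordinates to {free, at the lower bound, at the upper bound}, solve the free block by pseudoinverse, and keep the feasible candidate of least norm. A sketch, assuming scalar bounds apply to every coordinate; the name `enumerate_active_sets` is mine, not a standard one:

```python
import itertools
import numpy as np

def enumerate_active_sets(A, b, x_min, x_max):
    """Brute-force sketch of the recursive step 2: enumerate every pattern of
    coordinates being free, fixed at x_min, or fixed at x_max, complete the
    free coordinates with minimum norm, and keep the best feasible candidate."""
    n = A.shape[1]
    best = None
    for pattern in itertools.product((None, x_min, x_max), repeat=n):
        fixed = np.array([v is not None for v in pattern])
        x = np.zeros(n)
        x[fixed] = [v for v in pattern if v is not None]
        free = ~fixed
        if free.any():
            # Minimum-norm completion of the free coordinates.
            rhs = b - A[:, fixed] @ x[fixed]
            x[free] = np.linalg.pinv(A[:, free]) @ rhs
        if not np.allclose(A @ x, b):
            continue  # this pattern cannot meet the equality constraint
        if np.any(x < x_min - 1e-9) or np.any(x > x_max + 1e-9):
            continue  # a bound is violated
        if best is None or x @ x < best @ best:
            best = x
    return best
```

The all-free pattern reproduces step 1 (the pure pseudoinverse solution), so this subsumes the original algorithm; at the optimum the free coordinates satisfy exactly the minimum-norm condition of the reduced equality system, which is why the pseudoinverse completion is the right candidate for each pattern.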

The problem you have specified, though, looks familiar enough that the combinatorial explosion at the boundary has probably already been tamed by a simpler algorithm. For example, applying the method of Lagrange multipliers, you can conclude that there is some vector $\lambda$ such that $x=\frac12A^T\lambda$, hence $AA^T\lambda=2b$. That won't help if the unconstrained optimum falls outside your bounds, but symmetric linear solvers are typically quite a bit faster than more general-purpose tools.
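A minimal sketch of that Lagrange-multiplier route, assuming $A$ has full row rank so that $AA^T$ is invertible (the matrix and vector here are random illustrative data):

```python
import numpy as np

# Equality-constrained minimum-norm solution via the Lagrange condition
# x = (1/2) A^T lam with A A^T lam = 2 b. Assumes A has full row rank,
# so A A^T is a small symmetric positive definite system.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
b = rng.standard_normal(3)

lam = np.linalg.solve(A @ A.T, 2 * b)  # 3x3 symmetric solve
x = 0.5 * A.T @ lam

# Sanity checks: x satisfies A x = b and matches the pseudoinverse solution.
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.pinv(A) @ b)
```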

Another thought that comes to mind: with probability $1$ (under most common distributions of the entries of $A$ and $b$), the constraint matrix has the maximum possible rank (in your case, a $3\times4$ matrix has maximal rank $3$). If the system were square and invertible, its unique solution would by definition minimize $\|x\|_2^2$ (being the only solution, it is the minimum across all solutions), and you would just have to hope that it satisfies the remaining inequalities.

Update: Your comment is interesting. In the event that you don't care about the general problem and are definitely working with a $3\times4$ matrix with maximal rank there is quite a bit more we can say about the system.

First, that means the general solution to the system of equations can be represented as $\lambda v_h+v_p$, where $v_h$ is any nonzero solution to the homogeneous equation $Ax=0$ and $v_p$ is any solution to the full system $Ax=b$. Note that $v_h$ is (up to scale) the right singular vector corresponding to the singular value $0$, and all major linear-algebra libraries can compute it extremely efficiently. Likewise, finding a single solution to $Ax=b$ is implemented in those libraries and very fast.
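With NumPy, for instance, both pieces are one call each (random $A$ and $b$ stand in for the real data):

```python
import numpy as np

# Illustrative data: a random 3x4 Gaussian matrix has full row rank
# with probability 1, so its null space is one-dimensional.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
b = rng.standard_normal(3)

# The right singular vector for the zero singular value spans the
# null space of A, i.e. it solves A x = 0.
_, _, Vt = np.linalg.svd(A)
v_h = Vt[-1]

# One particular solution of A x = b (lstsq returns the min-norm one).
v_p, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(A @ v_h, 0)
assert np.allclose(A @ v_p, b)
```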

All that said, we now have the reduced problem of minimizing $\|\lambda v_h+v_p\|_2^2$, where $v_h$ and $v_p$ are both known and $\lambda$ is just a scalar, still subject to $m\leq\lambda v_h+v_p\leq M$ (coordinate-wise inequality) for some vectors $m$ and $M$. That system of inequalities can quickly be turned into a single lower bound and a single upper bound on $\lambda$. It may happen that the inequalities cannot all be satisfied; this is the step at which you determine feasibility of the whole problem.
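A sketch of that reduction: each coordinate inequality $m_i \le \lambda v_{hi} + v_{pi} \le M_i$ is divided by $v_{hi}$, with the bounds flipping when $v_{hi}<0$; coordinates with $v_{hi}=0$ are pure feasibility checks. The helper name `lambda_interval` is mine, not from the post:

```python
import numpy as np

def lambda_interval(v_h, v_p, m, M):
    """Reduce m <= lam * v_h + v_p <= M (coordinate-wise) to a single
    interval [lo, hi] for the scalar lam; lo > hi signals infeasibility."""
    lo, hi = -np.inf, np.inf
    for vh_i, vp_i, m_i, M_i in zip(v_h, v_p, m, M):
        if vh_i > 0:
            lo = max(lo, (m_i - vp_i) / vh_i)
            hi = min(hi, (M_i - vp_i) / vh_i)
        elif vh_i < 0:       # dividing by a negative number flips the bounds
            lo = max(lo, (M_i - vp_i) / vh_i)
            hi = min(hi, (m_i - vp_i) / vh_i)
        elif not (m_i <= vp_i <= M_i):
            return np.inf, -np.inf  # a v_h == 0 coordinate violates its box
    return lo, hi

# Example: v_h = (1, -1), v_p = (0, 0), box [-1, 1]^2 gives lam in [-1, 1].
lo, hi = lambda_interval([1.0, -1.0], [0.0, 0.0], [-1.0, -1.0], [1.0, 1.0])
assert (lo, hi) == (-1.0, 1.0)
```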

Let $v_{hi}$ and $v_{pi}$ denote the $i$-th coordinates of the homogeneous and particular solutions, respectively. Differentiating $\|\lambda v_h+v_p\|_2^2$ with respect to $\lambda$ and setting the derivative equal to $0$, we conclude $$\lambda=-\frac{\langle v_h, v_p\rangle}{\|v_h\|_2^2}=-\frac{\sum_i v_{hi}v_{pi}}{\sum_i v_{hi}^2}.$$

From there, note that $\|\lambda v_h+v_p\|_2^2$ is a convex quadratic in $\lambda$, so if this unconstrained minimizer falls outside the interval of interest, the optimum is simply the nearer endpoint: clamping $\lambda$ to the interval finishes the problem.
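Putting the whole rank-3 recipe together (null vector via SVD, particular solution via least squares, interval reduction, then clamping the convex quadratic's minimizer). This is a sketch, not a hardened implementation: the function name and the zero tolerance `tol` are my own choices, and the scalar bounds follow the original problem statement:

```python
import numpy as np

def min_norm_boxed(A, b, x_min, x_max, tol=1e-12):
    """Minimize ||x||_2 subject to A x = b and x_min <= x_i <= x_max,
    for a full-row-rank 3x4 matrix A (one-dimensional null space).
    Returns None when the box constraints make the problem infeasible."""
    _, _, Vt = np.linalg.svd(A)
    v_h = Vt[-1]                                 # spans the null space of A
    v_p, *_ = np.linalg.lstsq(A, b, rcond=None)  # one solution of A x = b

    # Reduce the coordinate-wise box constraints to an interval for lam.
    lo, hi = -np.inf, np.inf
    for vh_i, vp_i in zip(v_h, v_p):
        if vh_i > tol:
            lo = max(lo, (x_min - vp_i) / vh_i)
            hi = min(hi, (x_max - vp_i) / vh_i)
        elif vh_i < -tol:
            lo = max(lo, (x_max - vp_i) / vh_i)
            hi = min(hi, (x_min - vp_i) / vh_i)
        elif not (x_min <= vp_i <= x_max):
            return None          # coordinate fixed outside the box
    if lo > hi:
        return None

    # ||lam*v_h + v_p||^2 is a convex quadratic in lam, so the constrained
    # optimum is the unconstrained minimizer clamped to [lo, hi].
    lam = -np.dot(v_h, v_p) / np.dot(v_h, v_h)
    return min(max(lam, lo), hi) * v_h + v_p
```

Everything here is direct linear algebra: one SVD, one least-squares solve, and a scalar clamp, with no iterative quadratic solver involved.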