Does an analytical solution exist for this convex Euclidean affine projection problem with non-negativity constraints?


I've come across this convex optimization problem in my research, where I need to project a matrix $X_0$ onto the intersection of an affine non-negativity constraint set and box constraints. Concretely,

$X \in \mathbb{R}^{m \times n}, ~ A \in \mathbb{R}^{q \times m}, ~ C > 0, ~ m > q \gg n > 1$.

$\displaystyle \min_X \| X - X_0 \|_F \quad \text{subject to} \quad A X \geq \mathbf{0}_{q\times n}, \quad 0 \leq X_{ij} \leq C ~~~\forall i,j$.

Here, $C$ is a constant. What would be the most efficient way to solve this problem? First, does an analytical solution already exist? Section 8.1.1 of Boyd's book discusses similar problems, but it is restricted to half-spaces, box constraints, and cones. Is this problem reducible to a known problem with an analytical solution?

[Edit per @copper.hat] Yes, the matrix has a special structure. I've written out the original formulation below to describe the structure of the matrix $A$: its entries are only $1$, $0$, or $-1$, with some structure that could be exploited.

$\displaystyle \min_{B,Z} \| B - B_0 \|_F + \|Z - Z_0\|_F \quad \text{subject to} \quad B_{i,j} > Z^{i}_{k,j} ~~\forall k=1,\cdots,p, \quad B_{i,j} \leq C, \quad Z^{i}_{k,j}\geq 0$

$B \in \mathbb{R}^{v\times n}, ~Z^{i} \in \mathbb{R}^{p \times n}, ~Z = [Z^{1}; \cdots; Z^{v}] \in \mathbb{R}^{v p \times n}$.

Does this parameterization make the problem more solvable? Would this original formulation have an analytic solution?

Best Answer

There is no analytical solution to this problem for general $m$, $n$, $q$, $A$, $C$.

It is, however, easily converted to a form that is readily solved numerically. A potential conceptual block is that the objective and constraints involve functions on matrices, but it is not difficult to work around that using Kronecker products and the column-stacking operator $\mathop{\textrm{vec}}$: $$\begin{array}{ll} \text{minimize} & \|\mathop{\textrm{vec}}(X-X_0)\|_2 \\ \text{subject to} & (I\otimes A) \mathop{\textrm{vec}}(X) \geq 0 \\ & 0 \leq \mathop{\textrm{vec}}(X) \leq \mathop{\textrm{vec}}(C) \end{array}$$ Henceforth, let's use a bar as shorthand for $\mathop{\textrm{vec}}$, and define $\tilde{A}\triangleq (I\otimes A)$, so that $$\begin{array}{ll} \text{minimize} & \|\bar{X}-\bar{X}_0\|_2 \\ \text{subject to} & \tilde{A} \bar{X} \geq 0 \\ & 0 \leq \bar{X} \leq \bar{C} \end{array}$$ From here we have a couple of options. One is to square the objective, which does not change the optimal point: $$\begin{array}{ll} \text{minimize} & \|\bar{X}-\bar{X}_0\|_2^2 = \bar{X}^T \bar{X} - (2\bar{X}_0)^T \bar{X} + \bar{X}_0^T\bar{X}_0 \\ \text{subject to} & \tilde{A} \bar{X} \geq 0 \\ & 0 \leq \bar{X} \leq \bar{C} \end{array}$$ This is a simple convex quadratic program, and there are a variety of tools available to solve it.
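For concreteness, here is a small numerical sketch of this QP route in Python using SciPy's SLSQP solver. The problem sizes and random data are made up purely for illustration; any QP solver would do just as well:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n, q, C = 6, 2, 3, 1.0  # illustrative sizes only
A = rng.choice([-1.0, 0.0, 1.0], size=(q, m))   # {-1, 0, 1} entries, as in the question
X0 = rng.uniform(-0.5, 1.5, size=(m, n))

# vec() stacks columns (Fortran order), so vec(A X) = (I_n kron A) vec(X)
A_tilde = np.kron(np.eye(n), A)
x0 = X0.flatten(order='F')

res = minimize(
    fun=lambda x: 0.5 * np.sum((x - x0) ** 2),  # squared objective: same minimizer
    jac=lambda x: x - x0,
    x0=np.clip(x0, 0.0, C),                     # start inside the box
    bounds=[(0.0, C)] * (m * n),
    constraints=[{'type': 'ineq',               # 'ineq' means fun(x) >= 0
                  'fun': lambda x: A_tilde @ x,
                  'jac': lambda x: A_tilde}],
    method='SLSQP',
)
X_proj = res.x.reshape((m, n), order='F')       # un-vectorize the projection
```

The `order='F'` flattening matters: it is what makes the Kronecker identity $\mathop{\textrm{vec}}(AX) = (I\otimes A)\mathop{\textrm{vec}}(X)$ hold.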

Alternatively, we can linearize the objective: $$\begin{array}{ll} \text{minimize} & y \\ \text{subject to} & \| \bar{X} - \bar{X}_0 \|_2 \leq y \\ & \tilde{A} \bar{X} \geq 0 \\ & 0 \leq \bar{X} \leq \bar{C} \end{array}$$ Now this is in the form of a second-order cone program. Again, there are a variety of solvers available for problems of this type.

To be fair, it can be tedious and error-prone to convert this problem to the very specific standard forms solvers expect. That is where a modeling framework can help. For instance, my MATLAB-based framework CVX can express this problem naturally:

cvx_begin
    variable X(m,n)
    minimize( norm(X-X0,'fro') )
    A * X >= 0
    0 <= X <= C
cvx_end

No need to mess with Kronecker products at all.