I have the optimization problem
$$ \begin{align} \min_{\mathbf{P} \geq 0} \|\mathbf{A\odot(X-PQ^\top)}\|^2 + \frac{\|\mathbf{P}\|^2}{2} \end{align} $$ where $\odot$ is the Hadamard product, $\mathbf{A,X,P,Q}$ are matrices, and $\|\cdot\|$ is the Frobenius norm.
I am trying to solve this problem. My approach is based on forming the Lagrangian and setting its gradient to zero.
Setting the gradient of the Lagrangian to zero gives \begin{align} \mathbf{A\odot(X-PQ^\top)} + \mathbf{P} = \Lambda, \end{align} where $\Lambda$ is the matrix of dual variables.
Now, using the fact that $\Lambda \geq 0$, can I frame the above problem as the following LP?
\begin{align} \begin{split} \text{minimize} \quad &0 \\ \text{such that} \quad &\mathbf{A\odot(X-PQ^\top)} + \mathbf{P} \geq 0 \end{split} \end{align}
Is this LP formulation correct?
Consider the following set of definitions $$\eqalign{ \def\l{\lambda} \def\L{\left} \def\R{\right} \def\b{\big} \def\B{\Big} &\l^2 = \tfrac12 \\ &a = {\rm vec}(A),\quad \;\;p = {\rm vec}(P),\quad x = {\rm vec}(X)\\ &D={\rm Diag}(a),\quad b=Dx,\qquad\; H=D(Q\otimes I) \\ &F = A\odot(PQ^T-X) \\ &f = {\rm vec}(F) = Hp-b \\ }$$ Vectorization does not change the value of the Frobenius norm, e.g. $$\eqalign{ \b\|A\b\|_F^2 \;=\; \b\|a\b\|_F^2 \\ }$$ The objective function can be reformulated as $$\eqalign{ \def\H{\widehat H} \def\q{\overline b_\ } \phi(p) &= \b\|F\b\|_F^2 \qquad\;\;+\; \,\tfrac12\,\b\|P\b\|_F^2 \\ &= \b\|Hp-b\b\|_F^2 \;+\; \l^2\b\|p\b\|_F^2 \\ &= \L\| \pmatrix{H \\ {\l} I}p\;-\;\pmatrix{b\\0} \R\|_F^2 \\ &= \L\| \H p - \q \R\|_F^2 \\ }$$ Thus the problem becomes $$\eqalign{ &\min_{p}&\L\| \H p - \q \R\|_F^2 \\ &{\rm s.t.}\;\;&p\ge0 \\ }$$ which is known as the Non-Negative Least Squares (NNLS) problem.
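As a quick sanity check, the key identity $f={\rm vec}(F)=Hp-b$ can be verified numerically. The sketch below uses NumPy; the matrix sizes and random data are arbitrary assumptions, and `vec` is column-major stacking so that ${\rm vec}(PQ^T)=(Q\otimes I)\,{\rm vec}(P)$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 4, 3, 2                      # illustrative sizes (assumption)
A, X = rng.random((m, n)), rng.random((m, n))
P, Q = rng.random((m, k)), rng.random((n, k))

vec = lambda M: M.ravel(order='F')     # column-major vec()
D = np.diag(vec(A))                    # D = Diag(a)
H = D @ np.kron(Q, np.eye(m))          # H = D (Q ⊗ I)
b = D @ vec(X)                         # b = D x

F = A * (P @ Q.T - X)                  # F = A ⊙ (PQ^T − X)
assert np.allclose(vec(F), H @ vec(P) - b)
```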
Many solvers exist for this, e.g. Matlab's `lsqnonneg` or SciPy's `scipy.optimize.nnls`.
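For illustration, here is a sketch of the full pipeline with SciPy (the matrix sizes and random data are my own assumptions): build $\widehat H$ and $\overline b$, solve the NNLS problem, and confirm that the vectorized objective equals the original matrix-form objective.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n, k = 5, 4, 3                      # illustrative sizes (assumption)
A = rng.random((m, n))
X = rng.random((m, n))
Q = rng.random((n, k))

lam = np.sqrt(0.5)                     # lambda^2 = 1/2
vec = lambda M: M.ravel(order='F')     # column-major vec()

D = np.diag(vec(A))                    # D = Diag(a)
H = D @ np.kron(Q, np.eye(m))          # H = D (Q ⊗ I)
b = D @ vec(X)                         # b = D x

# stack to form the augmented system  Ĥ = [H; λI],  b̄ = [b; 0]
H_hat = np.vstack([H, lam * np.eye(m * k)])
b_bar = np.concatenate([b, np.zeros(m * k)])

p, _ = nnls(H_hat, b_bar)              # min ||Ĥp − b̄||²  s.t.  p ≥ 0
P = p.reshape(m, k, order='F')         # un-vec the solution

# the vectorized objective matches the original matrix form
obj_vec = np.sum((H_hat @ p - b_bar) ** 2)
obj_mat = np.sum((A * (X - P @ Q.T)) ** 2) + 0.5 * np.sum(P ** 2)
assert np.allclose(obj_vec, obj_mat)
assert np.all(p >= 0)
```

Note that the equality of the two objectives holds for any $p$, since $\|Hp-b\|^2 = \|A\odot(X-PQ^\top)\|^2$ term by term; the assertion merely confirms the construction of $\widehat H$ and $\overline b$ is consistent.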