Consider a vector ${\bf x}$ and a matrix $A_0$ with $A_0(i,j)\ge 0$. What is the best way to find a matrix $A$ such that
$$A = \arg \min \|A-A_0\|_{\text F}$$
subject to
$$A{\bf x} = \lambda {\bf x} \hspace{2mm} \mbox{and} \hspace{2mm} A(i,j)\ge0$$
where $\|\cdot\|_{\text F}$ denotes the Frobenius norm?
The matrix structure hides the fact that this is a relatively simple convex optimization problem. Let $\mathop{\textbf{vec}}$ be the operator that "stacks" the columns of a matrix into a tall vector. Then this problem is
$$\begin{array}{ll} \text{minimize}_{a,\lambda} & \| a - a_0 \|_2 \\ \text{subject to} & (x^T \otimes I)\, a - \lambda x = 0 \\ & a \geq 0 \end{array}$$
where $a \triangleq \mathop{\textbf{vec}}(A)$ and $a_0 \triangleq \mathop{\textbf{vec}}(A_0)$, and $\otimes$ is the Kronecker product. This is easily cast as either a second-order cone program or a quadratic program. The objective function is convex in $A$, and the constraints are linear in $(A,\lambda)$.
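As a sanity check on the vectorized form, the identity $\mathop{\textbf{vec}}(Ax) = (x^T \otimes I)\,\mathop{\textbf{vec}}(A)$ can be verified numerically. Here is a quick NumPy sketch (the dimension $n=4$ and the random data are arbitrary choices, not part of the problem):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.random((n, n))
x = rng.random(n)

# vec(A): stack the columns of A (column-major / Fortran order)
a = A.flatten(order="F")

# x^T (x) I is the n-by-n^2 matrix that multiplies vec(A)
K = np.kron(x, np.eye(n))          # shape (n, n**2)

# (x^T (x) I) vec(A) should equal A x
print(np.allclose(K @ a, A @ x))   # True
```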
If you use a modeling framework like CVX (disclaimer: mine) or YALMIP (not mine!) then you don't need to mess with the $\mathop{\textbf{vec}}$ operator or Kronecker products, because it will do that for you. In CVX, the model is only a few lines.
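A sketch of that CVX model (assuming `A0` is the given nonnegative $n\times n$ matrix and `x` the given $n$-vector, already in the workspace):

```matlab
% Sketch: requires CVX; assumes A0 (n-by-n, nonnegative) and x (n-by-1)
n = size(A0, 1);
cvx_begin
    variable A(n, n)
    variable lambda
    minimize( norm(A - A0, 'fro') )
    subject to
        A * x == lambda * x;
        A >= 0;
cvx_end
```

Note that `lambda` is declared as a scalar variable alongside `A`, so the constraint `A * x == lambda * x` stays linear in the decision variables, exactly as in the vectorized form above.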
EDIT: Another nice thing about using a framework like this is that if you decide you'd be interested in minimizing the spectral norm (i.e., the maximum singular value) of $A-A_0$ instead, you just change the objective to `norm(A-A0)`. You can even minimize the nuclear norm (i.e., the sum of the singular values) with `norm_nuc(A-A0)`! For both of these alternatives, the problem must be recast as a semidefinite program, and not in a straightforward manner; and it will be slower. But you can remain blissfully unaware of how the proverbial sausage is made, and let the framework do it for you. YALMIP can do all of these things as well.