I have a matrix $M$ and I would like to find two vectors $u$ and $v$, that minimize
$$ \sum_{i,j} (M_{i,j}-u_iv_j)^2 $$
How can I do this (numerically)?
Actually, this is a simplified version of my actual problem. For instance, I don't really have a full matrix, but only values for some range of $(i,j)$. However, I think that once I know how to solve the general case, it should be straightforward to skip certain entries.
First, note that $\sum_{ij} (M_{ij}-u_iv_j)^2 = \|M-uv^T\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. That may be helpful if you're searching for literature.
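As a quick numerical sanity check (a sketch using NumPy), the elementwise sum of squares does match the squared Frobenius norm of $M-uv^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))
u = rng.standard_normal(4)
v = rng.standard_normal(3)

lhs = ((M - np.outer(u, v)) ** 2).sum()               # elementwise sum of squares
rhs = np.linalg.norm(M - np.outer(u, v), "fro") ** 2  # squared Frobenius norm
assert np.isclose(lhs, rhs)
```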
In the unconstrained case, there is an analytic solution given by the singular value decomposition. That is, let $M=U\Sigma V^T$ be the SVD of $M$, with the singular values arranged in nonincreasing order along the diagonal of $\Sigma$. Then the triple $(\sigma_1,u_1,v_1)$ formed from the first singular value and the first columns of $U$ and $V$, respectively, gives your $u$ and $v$. For instance, $u=\sigma_1 u_1$, $v=v_1$ works, as does any rescaling $u=\alpha\sigma_1 u_1$, $v=\alpha^{-1} v_1$ with $\alpha\neq 0$.
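A minimal NumPy sketch of this: `np.linalg.svd` returns the singular values in nonincreasing order, and by the Eckart–Young theorem the residual of the best rank-one approximation is $\sqrt{\sigma_2^2+\cdots+\sigma_r^2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 4))

# Thin SVD; singular values come back in nonincreasing order.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
u = s[0] * U[:, 0]   # fold sigma_1 into u (any split across u, v works)
v = Vt[0]

# Eckart-Young: uv^T is the best rank-1 approximation, and the
# residual norm equals sqrt(sigma_2^2 + ... + sigma_r^2).
residual = np.linalg.norm(M - np.outer(u, v), "fro")
assert np.isclose(residual, np.sqrt((s[1:] ** 2).sum()))
```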
In the constrained case, unfortunately, you have a nonconvex optimization problem on your hands. Presumably you have a problem like this:
$$\begin{array}{ll} \text{minimize} & \|M-uv^T\|_F^2 \\ \text{subject to} & L_{ij} \leq u_iv_j \leq U_{ij} \quad \forall i,j \end{array}$$
where $L$ and $U$ are lower and upper bound matrices (possibly with infinite entries where you have missing information). Note the bounds constrain the model entries $u_iv_j$, since $M$ itself is given data. This is an extreme (rank-one) version of a low-rank approximation problem.
There is a variety of literature on this, but again, it's nonconvex, which means it's not easy to solve reliably in practice. A common heuristic is to alternate between minimizing over $u$ (holding $v$ fixed) and minimizing over $v$ (holding $u$ fixed); each subproblem is a simple least-squares problem, but there is no guarantee the alternation converges to the global optimum.
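The alternating scheme can be sketched as follows (a sketch for the unconstrained, fully observed case, where each half-step has a closed form; the function name is mine). In this special case the iteration is essentially the power method on $M^TM$, so it generically matches the SVD solution:

```python
import numpy as np

def rank1_als(M, iters=50, seed=0):
    """Alternating least squares for min_{u,v} ||M - u v^T||_F^2 (a sketch)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        u = M @ v / (v @ v)    # optimal u for fixed v (closed form)
        v = M.T @ u / (u @ u)  # optimal v for fixed u
    return u, v

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 5))
u, v = rank1_als(M)

# Compare against the SVD optimum (Eckart-Young residual).
s = np.linalg.svd(M, compute_uv=False)
err_als = np.linalg.norm(M - np.outer(u, v), "fro")
err_svd = np.sqrt((s[1:] ** 2).sum())
```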
EDIT: A simpler way to look at the problem where you know only a portion of the elements of $M$ is that you're minimizing only over the indices that you know: that is, $$\text{minimize} ~ \sum_{(i,j)\in\mathcal{I}} (M_{ij}-u_iv_j)^2$$ for some index set $\mathcal{I}$. After all, for the elements you don't know, you might as well just set $M_{ij}=u_iv_j$ after you've determined the $u$ and $v$ you prefer, so they will contribute 0 to the final Frobenius objective. Alas, that doesn't make the problem any easier; it's still nonconvex.
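The same alternating idea restricted to known entries can be sketched with a 0/1 mask matrix `W` encoding $\mathcal{I}$ (the helper name is mine; this assumes every row and column contains at least one known entry, otherwise a denominator is zero, and being nonconvex it may only find a local minimum):

```python
import numpy as np

def rank1_als_masked(M, W, iters=200, seed=0):
    """ALS for min sum_{(i,j) in I} (M_ij - u_i v_j)^2, W a 0/1 mask (a sketch)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        # Row-wise closed form: u_i = sum_j W_ij M_ij v_j / sum_j W_ij v_j^2
        u = (W * M) @ v / (W @ (v ** 2))
        v = (W * M).T @ u / (W.T @ (u ** 2))
    return u, v

# Exactly rank-1 data with a few entries treated as unknown.
u0 = np.array([1.0, 2.0, -1.0, 0.5])
v0 = np.array([3.0, -1.0, 2.0])
M = np.outer(u0, v0)
W = np.ones_like(M)
W[0, 1] = W[2, 2] = W[3, 0] = 0.0   # pretend these entries are unknown

u, v = rank1_als_masked(M, W, iters=500)
err = np.sqrt((W * (M - np.outer(u, v)) ** 2).sum())  # error on known entries only
```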