A more carefully worded version of this question, for professional mathematicians, appears here.
Consider a general $n \times n$ matrix ${\bf A}$ whose entries may be positive or negative. Is there an algorithm to write this matrix as a product of two $n \times n$ matrices, i.e., ${\bf A} = {\bf B}{\bf C}$, in which all the entries of ${\bf C}$ are non-negative?
Polar decomposition, in which ${\bf B}$ is unitary and ${\bf C}$ is positive semi-definite Hermitian, is closely related to this question but does not quite solve it. Likewise, positive matrix factorization (which assumes ${\bf A}$ itself has only non-negative entries) does not quite solve it. Nor does Cholesky decomposition, for related reasons.
Of course, the trivial solution ${\bf B} = {\bf A}$ and ${\bf C} = {\bf I}$ (the identity matrix) will yield such a factorization. So will trivial rescalings of that solution. (However, if all the entries of ${\bf A}$ are non-negative then the algorithm should return the solution ${\bf B} = {\bf I}$ and ${\bf C} = {\bf A}$.)
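To make the two trivial cases above concrete, here is a minimal sketch in NumPy. The function name and structure are illustrative only; it implements nothing beyond the solutions just described.

```python
import numpy as np

def trivial_factorization(A):
    """Return (B, C) with A = B @ C and C entrywise non-negative.

    Only the trivial solutions described above are implemented:
    if A is already non-negative, return B = I, C = A;
    otherwise return B = A, C = I.
    """
    n = A.shape[0]
    I = np.eye(n)
    if np.all(A >= 0):
        return I, A   # A non-negative: B = I, C = A
    return A, I       # general case: B = A, C = I

# Example: a matrix with a negative entry forces the B = A, C = I branch.
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
B, C = trivial_factorization(A)
assert np.allclose(B @ C, A) and np.all(C >= 0)
```

The point of the question is precisely that these trivial factorizations are unsatisfying, which is why a merit function is needed.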
Hence we need a condition, merit function, or constraint to make the problem well-posed. As the motivation below explains, the ideal metric favors factorizations in which as many as possible of the non-zero multiplications come from ${\bf C}$ (low computational cost) and as few as possible come from ${\bf B}$ (high computational cost). In short, the ideal metric rewards a large number of (positive) entries in ${\bf C}$ and a small number of (positive and negative) entries in ${\bf B}$.
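One hypothetical way to formalize that criterion, purely for illustration: score a candidate factorization by the number of non-zero entries in ${\bf C}$ minus the number in ${\bf B}$. This is my own reading of the sketch above, not a canonical definition.

```python
import numpy as np

def merit(B, C):
    """Hypothetical merit score for a factorization A = B @ C:
    reward non-zero (non-negative) entries in C, penalize non-zero
    entries in B. Higher is better under the cost model sketched above."""
    return np.count_nonzero(C) - np.count_nonzero(B)

# Under this score, the trivial solution B = I, C = A beats B = A, C = I
# whenever A has more than n non-zero entries.
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(merit(np.eye(2), A))   # B = I, C = A  -> 4 - 2 = 2
print(merit(A, np.eye(2)))   # B = A, C = I  -> 2 - 4 = -2
```

Note that $B = I$, $C = A$ is only admissible when $A$ is non-negative, so for a general $A$ the question is whether a non-trivial factorization can score better than $B = A$, $C = I$.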
I do not need an algorithm to find a unique decomposition, just a principled method for finding at least one.
Motivation
Suppose some computational system can perform a matrix multiplication at extremely low cost only if every component scalar multiplication is of two non-negative real numbers. Then if we can put as much as possible of the overall computation of ${\bf A}{\bf x}$ (where ${\bf x}$ is a vector with non-negative entries) into the product ${\bf C}{\bf x}$, the overall cost will be low.