(Sorry for the noob question, if this is totally obvious...)
I'm curious why matrix decomposition methods don't use logs.
EG:
Trying to solve for $u_1, u_2, v_1, v_2$ in the following
$$ \begin{align} \left[ \begin{array}{cc} a&b\\ c&d \end{array} \right] = \left[ \begin{array}{c} u_1\\ u_2 \end{array} \right] . \left[ \begin{array}{c c} v_1 & v_2 \end{array} \right] \end{align} $$
is equivalent to solving for them in this system of equations
$$ \begin{array}{ccc} a & = & u_1 . v_1\\ b & = & u_1 . v_2\\ c & = & u_2 . v_1\\ d & = & u_2 . v_2 \end{array} $$
Is it then valid to solve for them using logs to transform the multiplication problem into an additive problem?
$$ \begin{array}{ccc} \log(a) & = & \log(u_1) + \log(v_1)\\ \log(b) & = & \log(u_1) + \log(v_2)\\ \log(c) & = & \log(u_2) + \log(v_1)\\ \log(d) & = & \log(u_2) + \log(v_2) \end{array} $$
in which case solving it reduces to solving
$$ \begin{align} \left[ \begin{array}{c} \log(a)\\ \log(b)\\ \log(c)\\ \log(d) \end{array} \right] = \left[ \begin{array}{cccc} 1&0&1&0\\ 1&0&0&1\\ 0&1&1&0\\ 0&1&0&1 \end{array} \right] . \left[ \begin{array}{c} \log(u_1)\\ \log(u_2)\\ \log(v_1)\\ \log(v_2) \end{array} \right] \end{align} $$
After which, we can apply $\exp$ to recover the original values of $u_1, u_2, v_1, v_2.$
Other than restricting the solution to the space where everything ($a, b, c, d, u_1, u_2, v_1, v_2$) is strictly positive, is there any other reason for not using logs? Perhaps solving the transformed problem is more work than solving the original one?
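For concreteness, here is a minimal NumPy sketch of the idea (the strictly positive rank-one input matrix and the choice of `np.linalg.lstsq` are my own illustrative assumptions; a least-squares solver is used because the $4\times 4$ coefficient matrix turns out not to be invertible):

```python
import numpy as np

# A strictly positive rank-one matrix: outer([2, 3], [1, 2])
A = np.array([[2.0, 4.0],
              [3.0, 6.0]])

# Coefficient matrix of the log-linear system
M = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1]], dtype=float)

b = np.log(A.flatten())  # [log a, log b, log c, log d]

# M is singular, so lstsq returns one of the (infinitely many) solutions
x, *_ = np.linalg.lstsq(M, b, rcond=None)

u = np.exp(x[:2])  # (u1, u2)
v = np.exp(x[2:])  # (v1, v2)

print(np.outer(u, v))  # reproduces A up to floating-point error
```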
Your question is original. It prompts me to add some precision in the spirit of our comments: @mathreader's about the specificity of "rank-one approximations", and mine about the fact that the determinant of the matrix (call it $M$) is $0$, which I deduced from the following linear combination of its columns:
$$\tag{*}C_1 + C_2 - C_3 - C_4=0 \ \ \iff \ \ \begin{pmatrix} \ \ 1\\ \ \ 1\\ -1\\ -1\end{pmatrix} \in Ker(M).$$
We are going to look at a few interesting points of view, and we will come back at the end to relationship (*).
Question: Could it have been expected that the system
$$\tag{1}\begin{align} \left( \begin{array}{c} \log(a)\\ \log(b)\\ \log(c)\\ \log(d) \end{array} \right) =\underbrace{ \left( \begin{array}{cccc} 1&0&1&0\\ 1&0&0&1\\ 0&1&1&0\\ 0&1&0&1 \end{array} \right)}_{M} . \left( \begin{array}{c} \log(u_1)\\ \log(u_2)\\ \log(v_1)\\ \log(v_2) \end{array} \right) \end{align}$$
is not invertible?
Yes, it could, because in particular there is a visible "degree of freedom": if
$$\tag{2}\begin{align} \left( \begin{array}{cc} a&b\\ c&d \end{array} \right) = \left( \begin{array}{c} u_1\\ u_2 \end{array} \right) . \left( \begin{array}{c c} v_1 & v_2 \end{array} \right) \end{align}$$
is a solution, then
$$\begin{align} \tag{3}\left( \begin{array}{cc} a&b\\ c&d \end{array} \right) = \left( \begin{array}{c} \alpha u_1\\ \alpha u_2 \end{array} \right) . \left( \begin{array}{c c} \tfrac{1}{\alpha}v_1 & \tfrac{1}{\alpha}v_2 \end{array} \right) \end{align}$$
is also a solution for any $\alpha \neq 0$, ruling out any hope of a unique solution to (1). But there is no other degree of freedom... How can we be almost sure of that? Because, beyond the fact that $\det(M)=0$, we have $\operatorname{rank}(M)=3$, and a rule of thumb (in fact more than a rule of thumb) is that the rank drop, here from $4$ (full rank) to $3$, equals the number of degrees of freedom.
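These facts about $M$ (zero determinant, rank $3$, and the kernel vector from (*)) can be confirmed in a few lines of NumPy:

```python
import numpy as np

M = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1]], dtype=float)

k = np.array([1, 1, -1, -1], dtype=float)  # candidate kernel vector from (*)

print(np.linalg.det(M))          # 0, up to floating-point error
print(np.linalg.matrix_rank(M))  # 3: the rank drops by exactly one
print(M @ k)                     # the zero vector: k spans the kernel
```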
Now we can come back to the connection with your "log" approach: take the logarithm of (3) and you will get (1) with an added term, which (assuming $\alpha > 0$) is:
$$\begin{pmatrix} \ \ \log(\alpha)\\ \ \ \log(\alpha) \\ -\log(\alpha)\\ -\log(\alpha)\end{pmatrix}=\log(\alpha)\begin{pmatrix}1\\1\\ -1\\ -1\end{pmatrix}$$
We recover the direction vector of the kernel we met in (*)! Nothing strange: the solution of a linear problem is always determined only up to adding elements of the kernel...
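To close the loop, one can check numerically that the log-vectors of the two factorizations (2) and (3) solve the same system (1) and differ exactly by $\log(\alpha)$ times that kernel vector (the particular $u$, $v$, $\alpha$ below are arbitrary illustrative choices):

```python
import numpy as np

M = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1]], dtype=float)

u = np.array([2.0, 3.0])
v = np.array([1.0, 2.0])
alpha = 5.0

x1 = np.log(np.concatenate([u, v]))                  # log-vector from (2)
x2 = np.log(np.concatenate([alpha * u, v / alpha]))  # log-vector from (3)

print(M @ x1 - M @ x2)            # zero (up to rounding): same right-hand side
print((x2 - x1) / np.log(alpha))  # ~ [1, 1, -1, -1]: the kernel vector
```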