I need to learn low-rank factorization and its application in machine learning and digital image processing. But I have two questions:
Is low-rank factorization another name for low-rank approximation? If the answer is no, what is the main difference between them?
Could you please suggest several references from which I can learn low-rank factorization?
I would say low-rank factorization is a special case of low-rank approximation.
Low-rank approximation concerns approximating vectors in tensor-product vector spaces by sums of elementary tensors, with the goal of keeping the sums as short as possible while making the approximation as good as possible.
Low-rank factorization is the special case where the tensor product is the outer product of vectors (defined by $(a\otimes b)_{ij}:=a_ib_j$ for vectors $a$ and $b$). Note, for example, that the truncated Singular Value Decomposition of a matrix $M$ consists of truncating the sum in $M=U\Sigma V^{T}=\sum_{j=1}^{n}\sigma_j u_j\otimes v_j$. By omitting the terms with small $\sigma_j$, you introduce the least error (when measured in appropriate norms).
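To make this concrete, here is a minimal NumPy sketch (my own illustrative example, with arbitrary sizes and data) of truncating the SVD sum of elementary tensors and checking the resulting error:

```python
import numpy as np

# Build a nearly rank-2 matrix: a rank-2 product plus small noise.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
M += 1e-3 * rng.standard_normal((6, 5))

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Keep only the k = 2 largest singular values:
# M_k = sum_{j=1}^{k} sigma_j * (u_j outer v_j), a short sum of elementary tensors.
k = 2
M_k = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(k))

# In the spectral norm, the error of the truncation equals the first
# omitted singular value (Eckart-Young theorem).
err = np.linalg.norm(M - M_k, ord=2)
print(err, s[k])
```

Because the noise is small, the omitted singular values are tiny and the rank-2 truncation reproduces $M$ almost exactly.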
Summary: The central observation was that the multiplication of matrices shaped $(m,n)$ and $(n,k)$ can be viewed 'componentwise' by writing it as a sum of $n$ outer products of columns of length $m$ with rows of length $k$, and that this allows you to think of matrix multiplication as summing up elementary tensors.
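The summary above can be sketched in a few lines of NumPy (dimensions and data chosen arbitrarily for illustration): the $(m,n)\times(n,k)$ product is recovered as a sum of $n$ outer products.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 4, 3, 5
A = rng.standard_normal((m, n))  # shape (m, n)
B = rng.standard_normal((n, k))  # shape (n, k)

# Matrix multiplication as a sum of n elementary tensors:
# each term is the outer product of a column of A (length m)
# with the corresponding row of B (length k).
C = sum(np.outer(A[:, j], B[j, :]) for j in range(n))

# This agrees with the usual matrix product.
assert np.allclose(C, A @ B)
```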