Let's say I have the matrices
$A = B \cdot C$
The matrices $A$ and $C$ are given; they contain real values $\in [0,1]$. I want to calculate the matrix $B$, whose values can only be binary, $\in \{0, 1\}$, so
$B = A \cdot C^{-1}$
is not an option. Additionally, I might need to place constraints on the rows or columns of $B$, such that the sum of each row/column must be $1$.
I realize that this might not be exactly solvable and I don't mind - it is sufficient to reduce the sum of squared errors of $A - B \cdot C$ as much as possible. What method, or type of method, do I need to use to solve this kind of problem?
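To make the setup concrete, here is a minimal sketch of the objective and constraint on a tiny instance (the variable names and sizes are my own, chosen only for illustration). With the row-sum-$1$ constraint, each row of $B$ is a standard basis vector, so a brute-force search over the $c^c$ candidates is possible when $c$ is tiny:

```python
import itertools
import numpy as np

# Tiny illustrative instance (c = 2, o = 3); not from the question itself.
rng = np.random.default_rng(0)
C = rng.random((2, 3))          # given, real values in [0, 1]
B_true = np.array([[0, 1],
                   [1, 0]])     # binary, each row sums to 1
A = B_true @ C                  # given

# Brute force over all binary B whose rows sum to 1:
# each row of B is a row of the identity matrix, so there are c^c candidates.
best, best_err = None, np.inf
for rows in itertools.product(np.eye(2, dtype=int), repeat=2):
    B = np.vstack(rows)
    err = np.sum((A - B @ C) ** 2)   # sum of squared errors ||A - BC||_F^2
    if err < best_err:
        best, best_err = B, err

print(best)          # recovers B_true here, with error 0
print(best_err)
```

This is only to pin down what I mean by the problem; as noted below, enumerating candidates is hopeless at the sizes I actually have.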
Regarding the sizes of the matrices:
- $A$ is a $c \times o$ matrix
- $B$ is a $c \times c$ matrix
- $C$ is a $c \times o$ matrix
where $c$ is somewhere between 1 and 100, and $o$ can be very large, say between five hundred and a million or more. Brute force is not feasible, as there are potentially $2^{100 \cdot 100}$ variants of $B$.
(It is not exactly matrix factorization, is it? Is it some special kind of matrix decomposition? It looks to me like a problem of solving a system of linear equations, but I am not sure. I feel like I am failing to see the obvious.)