Let us suppose that we want to represent a matrix $V$ as a product of two matrices $W$ and $H$ (i.e., $V \approx W H$).
The matrices $W$ and $H$ can be found by the following iterative procedure:
    from numpy import matrix, array, transpose

    # v: the matrix to factorize; w, h: non-negative initial guesses (matrix objects)
    for iteration in range(max_iter):  # iterate a fixed number of times (or until convergence)
        # update the h matrix
        hn = transpose(w)*v
        hd = transpose(w)*w*h
        h = matrix(array(h)*array(hn)/array(hd))
        # update the w matrix
        wn = v*transpose(h)
        wd = w*h*transpose(h)
        w = matrix(array(w)*array(wn)/array(wd))
where `matrix` objects are multiplied with the usual matrix product, while multiplication of `array` objects is element-wise.
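To make the distinction between the two kinds of multiplication concrete, here is a minimal NumPy sketch (the particular $2 \times 2$ matrices are arbitrary illustrative values):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

# matrix product (row-by-column), as with the matrix objects above
print(a @ b)  # [[19 22]
              #  [43 50]]

# element-wise product (entry-by-entry), as with the array objects above
print(a * b)  # [[ 5 12]
              #  [21 32]]
```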
I have the following question: can this iterative procedure be generalized to the case in which the input matrix $V$ (the one to be factorized) has missing values?
I would also like to understand the intuition behind this procedure. Why does it work?
ADDED
In the general case the matrix to be factorized is not necessarily square or symmetric (although it might be). The only restriction on it is that it does not contain negative values. Correspondingly, the $W$ and $H$ matrices should not contain negative values either (which is why this is called non-negative matrix factorization).
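For illustration, here is a self-contained sketch of the multiplicative updates on a small non-square, non-negative matrix, written with plain NumPy arrays; the matrix, the rank, the iteration count, and the small constant added to the denominators (to avoid division by zero) are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random((6, 4))   # non-negative, non-square matrix to factorize
r = 2                    # chosen factorization rank (assumption)
w = rng.random((6, r))   # non-negative initial guesses
h = rng.random((r, 4))
eps = 1e-9               # guards the denominators against division by zero

for _ in range(200):
    # same updates as above: multiply each entry by a non-negative ratio
    h *= (w.T @ v) / (w.T @ w @ h + eps)
    w *= (v @ h.T) / (w @ h @ h.T + eps)

print(np.linalg.norm(v - w @ h))           # rank-2 reconstruction error
print((w >= 0).all() and (h >= 0).all())   # True: factors stay non-negative
```

Because every update multiplies the current entries by a ratio of non-negative quantities, $W$ and $H$ can never acquire negative entries once initialized non-negatively.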
One example where the procedure described above has been implemented is here.
ADDED 2
Here is the original paper about iterative factorization of matrices.