This question concerns eigenface decomposition with $M$ images of size $N \times N$.
In short: My question lies in the following statement (bottom of page 4 of the linked paper): "If the number of data points in the image space is less than the dimension of the space $(M<N^2)$, there will be only $M-1$, rather than $N^2$, meaningful eigenvectors". Essentially, if there are 30 images, we can simply use the $M \times M$ COV matrix, rather than the $N^2 \times N^2$ COV matrix, to find the meaningful eigenvectors. I am assuming this is a simple linear algebra result, which I have long forgotten.
Let's suppose we have an image stack of size $256 \times 256 \times 30$ (i.e. 30 images). Rearranged as a Casorati matrix $A$ (one vectorized image per column), its size is $65536 \times 30$.
The covariance matrix can be calculated as $C = A A^T$, which results in a matrix of size $65536 \times 65536$, yielding 65536 eigenvectors.
As outlined in the paper linked above (bottom of page 4), we can instead diagonalize the much smaller $M \times M$ matrix $A^T A$. If $\vec{v}$ is an eigenvector of $A^T A$ with eigenvalue $\lambda$, premultiplying $A^T A \, \vec{v} = \lambda \vec{v}$ by $A$ gives
$$A \, A^{T} \, A \,\, \vec{v} = \lambda \, A \, \vec{v}$$
such that the $A \vec{v}$ are eigenvectors of $A \, A^T$, giving the $M - 1 = 29$ meaningful eigenvectors.
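To make the trick concrete, here is a small NumPy sketch (with hypothetical sizes of my choosing, not the $65536 \times 30$ case above, just to keep it fast). It checks numerically that each $A\vec{v}$ satisfies the big eigenvalue equation, and that after subtracting the mean image $A$ has rank at most $M-1$:

```python
import numpy as np

# Hypothetical example: M = 5 "images" of 16 x 16 pixels (my own toy sizes).
rng = np.random.default_rng(0)
M, n_pixels = 5, 16 * 16
A = rng.standard_normal((n_pixels, M))   # Casorati matrix, one image per column
A = A - A.mean(axis=1, keepdims=True)    # subtract the mean image

# Small M x M eigenproblem: A^T A v = lambda v
lam, v = np.linalg.eigh(A.T @ A)

# Map up to image space: the columns of u = A v should be
# eigenvectors of the big n_pixels x n_pixels matrix A A^T.
u = A @ v
big = A @ A.T

# Verify A A^T (A v_i) = lambda_i (A v_i) for every eigenpair
for i in range(M):
    assert np.allclose(big @ u[:, i], lam[i] * u[:, i])

# After mean subtraction the columns of A sum to zero, so rank(A) <= M - 1,
# which is where the "only M - 1 meaningful eigenvectors" count comes from.
print(np.linalg.matrix_rank(A))
```

The remaining $65536 - (M-1)$ eigenvalues of $A A^T$ are all zero, since $\operatorname{rank}(A A^T) = \operatorname{rank}(A)$.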
My question lies in the following statement: "If the number of data points in the image space is less than the dimension of the space $(M<N^2)$, there will be only $M-1$, rather than $N^2$, meaningful eigenvectors."
I am having difficulty understanding why the above statement is true. It makes sense, when I think about having $M$ images, but how does this relate to the 65536 eigenvectors that would be calculated using the traditional COV formalism?
I hope my question is clearly stated.
It turns out this is a duplicate, which answers my question. I will leave it to the community to decide whether I should delete or just close the question.