Principal Component Analysis in Face Recognition - number of Eigenvalues?


I'm at the beginning of learning about PCA as it is applied in the field of face recognition (the Eigenface algorithm), and I came across the following question:

"You're using a training set of 80 images (150x150 pixels). After visualizing the Eigenvalues you decide to keep 40% of the Eigenvectors. What dimension do the resulting projected images (template) have?"

Now, since I think the number of eigenvalues calculated from a data set equals the number of dimensions of that data set, I'd say you'd get images with 9000 dimensions: the training images have 150x150 = 22500 dimensions, and I'd keep 40% of those.

So is this assumption correct? Or does the number of eigenvalues differ from the dimension of the input images?

Thank you, if you need clarification on the question, just ask.


3 Answers

Best answer:

Yes, that's correct. Although keeping "40% of the eigenvectors" is not very meaningful in itself; it's better to talk about how much variance is captured, i.e., the proportion of the sum of the eigenvalues.

See the Eckart–Young theorem to understand how eigenvalues relate to reconstruction error.
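To illustrate "proportion of the sum of eigenvalues": the squared singular values of the centered data give the covariance eigenvalues, and their cumulative sum tells you how many components are needed for a given variance target. A minimal sketch with NumPy, using random data in place of real face images (the 95% threshold is an arbitrary example, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 80 flattened 150x150 images.
X = rng.standard_normal((80, 150 * 150))
X -= X.mean(axis=0)                       # center the data

# Singular values of the centered data relate to the covariance
# eigenvalues via eigvals = s**2 / (n - 1), already sorted descending.
s = np.linalg.svd(X, compute_uv=False)
eigvals = s**2 / (X.shape[0] - 1)

# Fraction of total variance captured by the first k components.
explained = np.cumsum(eigvals) / eigvals.sum()
k = np.searchsorted(explained, 0.95) + 1  # components for 95% variance
```

Choosing `k` this way replaces the arbitrary "keep 40% of the eigenvectors" with a criterion tied to reconstruction quality.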

Answer:

You might want to consider reading this paper. I found it very helpful when I was first introduced to PCA.

http://www.snl.salk.edu/~shlens/pca.pdf

Answer:

Not sure, but I think the number of eigenvalues/eigenvectors you get depends on the number of images, so it would be 0.4 * 80 = 32 of them. The images are "flattened", so you do the covariance and eigenvector/eigenvalue calculations on an array with 80 rows and 150*150 columns. That way you end up with a square symmetric 80-by-80 matrix, and doing the eigen-decomposition on that matrix gives a maximum of 80 eigenvectors/eigenvalues.
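The 80-by-80 trick described above (the Gram-matrix shortcut used in the original Eigenfaces paper) can be sketched as follows, again with random stand-in data instead of real images:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 150 * 150))  # 80 flattened 150x150 images
X -= X.mean(axis=0)                       # center the data

# Instead of the huge 22500x22500 covariance matrix, form the 80x80
# Gram matrix X @ X.T: it has the same nonzero eigenvalues, and each
# of its eigenvectors v maps to a full-size eigenvector X.T @ v.
G = X @ X.T
w, V = np.linalg.eigh(G)                  # eigenvalues in ascending order
idx = np.argsort(w)[::-1]                 # sort descending
w, V = w[idx], V[:, idx]

eigenfaces = X.T @ V                      # shape (22500, 80)

# Keep 40% of the (at most 80) components: 32-dimensional templates.
k = int(0.4 * 80)
top = eigenfaces[:, :k]
top /= np.linalg.norm(top, axis=0)        # normalize kept eigenfaces
templates = X @ top                       # shape (80, 32)
```

So each projected image (template) is a 32-dimensional vector, not a 9000-dimensional one: the rank of the covariance matrix is bounded by the number of training images, so only (at most) 80 eigenvalues can be nonzero.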