I was studying principal component analysis and came across a claim that I was not able to prove.
Consider a data matrix X and its covariance matrix S. I know that the eigenvector of S corresponding to the largest eigenvalue of S gives the best representation of X in a 1D subspace.
However, how can I explicitly show the relationship between the mean-squared error of this projection (onto the 1D subspace) and the largest eigenvalue of the covariance matrix S?
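For what it's worth, a quick numerical check (a sketch of my own with synthetic data; the formula it tests is my conjecture, not something I can prove) suggests the MSE equals tr(S) minus the largest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 500 samples, 3 features with different scales
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.5])

Xc = X - X.mean(axis=0)               # center the data
S = Xc.T @ Xc / Xc.shape[0]           # covariance matrix (1/N normalization)
eigvals, eigvecs = np.linalg.eigh(S)  # eigh returns eigenvalues in ascending order
u = eigvecs[:, -1]                    # eigenvector for the largest eigenvalue

proj = Xc @ np.outer(u, u)            # orthogonal projection onto the 1D subspace
mse = np.mean(np.sum((Xc - proj) ** 2, axis=1))  # mean squared residual norm

print(mse)                      # numerically matches S.trace() - eigvals[-1]
print(S.trace() - eigvals[-1])
```

The two printed numbers agree to machine precision in my runs, which is what I would like to establish analytically.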