I have seen two versions of standard/classical Principal Component Analysis (PCA), and I have no idea how they are related:
Version 1: Wikipedia.
This version is about solving for the eigenvectors and eigenvalues.
The first principal component $\omega_{1}$ is defined as $$\omega_{1}=\underset{\|\omega\|=1}{\operatorname{arg\,max}}\ \operatorname{Var}\{\omega^{T}X\}=\underset{\|\omega\|=1}{\operatorname{arg\,max}}\ E\{(\omega^{T}X)^{2}\},$$ and the $k$-th principal component is computed by first deflating the data, $$\hat{X}_{k-1}=X-\sum_{i=1}^{k-1}\omega_{i}\omega_{i}^{T}X,$$ and then maximizing again: $$\omega_{k}=\underset{\|\omega\|=1}{\operatorname{arg\,max}}\ E\{(\omega^{T}\hat{X}_{k-1})^{2}\}.$$
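To make Version 1 concrete, here is a minimal NumPy sketch of this maximize-then-deflate recursion. The toy dataset and the helper names (`first_pc`, `pca_deflation`) are my own, not from Wikipedia; I use the fact that the unit vector maximizing $E\{(\omega^{T}X)^{2}\}$ for centered data is the top eigenvector of the sample covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: 200 points in R^3, stacked as columns, with very different
# variances per axis so the principal directions are well separated
X = rng.standard_normal((3, 200)) * np.array([[3.0], [1.0], [0.1]])
X = X - X.mean(axis=1, keepdims=True)  # center, so Var{w^T X} = E{(w^T X)^2}

def first_pc(X):
    # arg max over unit w of E{(w^T X)^2} is the top eigenvector
    # of the sample covariance C = X X^T / n
    C = X @ X.T / X.shape[1]
    vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return vecs[:, -1]               # eigenvector of the largest eigenvalue

def pca_deflation(X, k):
    ws = []
    Xh = X.copy()
    for _ in range(k):
        w = first_pc(Xh)
        ws.append(w)
        Xh = Xh - np.outer(w, w) @ Xh  # deflate: X_hat = X_hat - w w^T X_hat
    return np.column_stack(ws)

W = pca_deflation(X, 2)  # first two principal directions, one per column
```

The deflation step removes the component of every data point along $\omega_{i}$, so each subsequent maximization is automatically restricted to directions orthogonal to the earlier ones.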
Version 2: the paper "Robust Principal Component Analysis?" by E. J. Candès et al.
This says that if we stack all the data points as the column vectors of a matrix $M$, the matrix should (approximately) have low rank: mathematically, $$M = L_{0} + N_{0},$$ where $L_{0}$ has low rank and $N_{0}$ is a small perturbation matrix. Classical PCA [Hotelling 1933; Eckart and Young 1936; Jolliffe 1986] seeks the best (in an $\ell_2$ sense) rank-$k$ estimate of $L_{0}$ by solving $$\underset{L}{\text{minimize}}\ \|M-L\| \quad \text{subject to}\ \operatorname{rank}(L)\le k.$$ (Throughout that article, $\|M\|$ denotes the 2-norm; that is, the largest singular value of $M$.)
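For comparison, here is a minimal NumPy sketch of Version 2. By the Eckart–Young theorem (cited in the paper), the minimizer of this constrained problem is the truncated SVD of $M$; the matrix sizes, noise level, and helper name `best_rank_k` below are my own toy choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# build M = L0 + N0: a rank-5 matrix plus a small perturbation
L0 = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
N0 = 0.01 * rng.standard_normal((50, 40))
M = L0 + N0

def best_rank_k(M, k):
    # Eckart-Young: truncating the SVD at k terms minimizes ||M - L||
    # over all L with rank(L) <= k (in both the 2-norm and Frobenius norm)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

L = best_rank_k(M, 5)
# the optimal 2-norm error equals the (k+1)-th singular value of M
err = np.linalg.norm(M - L, 2)
```

Since the small perturbation $N_{0}$ barely moves the top singular values, the rank-5 estimate $L$ recovers $L_{0}$ up to an error on the order of $\|N_{0}\|$.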
It would be greatly appreciated if anyone could answer my question!