I recently learned about PCA and robust PCA. I understand that PCA identifies the principal components by finding the eigenvectors of the covariance matrix (which of course encodes the "directionality" of the data). The data are then projected onto these principal components so that they are re-expressed in the principal-component basis (let's call this reoriented data matrix $D$). I also understand that robust PCA can be posed as a convex relaxation, which can be solved, e.g. with augmented Lagrangian methods, to decompose the data matrix into a low-rank matrix $L$ and a sparse matrix $S$.
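To make sure I have the classical procedure right, here is a minimal sketch of what I mean, PCA via the covariance eigendecomposition (NumPy; the synthetic data and variable names are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data: 200 samples, 3 correlated features
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))

# center the data, then eigendecompose the covariance matrix
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending eigenvalues

# sort principal components by decreasing variance
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]         # columns are the principal components

# re-express the data in the principal-component basis: this is my "D"
D = Xc @ components
```

The covariance of $D$ comes out diagonal, with the sorted eigenvalues on the diagonal, which is how I am thinking of "reorienting the data along the principal components."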
Is it correct to say that this low-rank matrix $L$ is the analogue of the $D$ matrix I described earlier? To find the principal components in robust PCA, would I simply take the eigenvectors of the low-rank matrix $L$? Lastly, are robust PCA and PCA named as such because both produce low-rank representations of the data, and not because of any algorithmic similarity in how they are computed?
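To make the question concrete, this is the kind of decomposition I mean: minimize $\|L\|_* + \lambda \|S\|_1$ subject to $L + S = M$, solved with a basic augmented-Lagrangian (ADMM-style) loop. The $\lambda = 1/\sqrt{\max(m,n)}$ default is the common choice from the literature; the function name, the fixed-$\mu$ penalty, and the iteration count are my own choices for illustration, not any library's API:

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, n_iter=500):
    """Sketch of robust PCA as a convex program:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M,
    via a fixed-penalty augmented-Lagrangian loop (illustrative, not tuned)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))               # common default
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                             # Lagrange multiplier
    for _ in range(n_iter):
        # L-update: singular-value thresholding (shrink singular values by 1/mu)
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft-thresholding at lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual ascent on the constraint L + S = M
        Y = Y + mu * (M - L - S)
    return L, S
```

On a synthetic low-rank-plus-sparse matrix this loop does separate the two parts; real implementations add a growing penalty parameter and stopping criteria, but I hope this pins down which $L$ I am asking about.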