For the purpose of computing the principal components of a dataset, represented as an $n \times p$ matrix $X$ with $n$ samples and $p$ features, we can form the sample covariance matrix $S$ and compute its eigenvalue decomposition: $$ S = Q D Q^t $$ The principal components are then given by $Z = X \cdot Q$.
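For concreteness, here is a minimal NumPy sketch of this covariance route (variable names are my own; I center $X$ explicitly, which the covariance computation assumes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 4
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated features

Xc = X - X.mean(axis=0)        # center before forming the covariance
S = Xc.T @ Xc / (n - 1)        # sample covariance matrix

D, Q = np.linalg.eigh(S)       # S = Q @ diag(D) @ Q.T, columns of Q are eigenvectors
order = np.argsort(D)[::-1]    # reorder by decreasing variance
D, Q = D[order], Q[:, order]

Z = Xc @ Q                     # principal components (scores)
```

The covariance of the scores $Z$ is then the diagonal matrix of eigenvalues, which is the defining property of the decomposition.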
An alternative method is to use the sample correlation matrix $\hat{S} = \sigma^{-1} S \sigma^{-1}$, where $\sigma = \mathrm{diag}\left(\sigma_1, \cdots, \sigma_p\right)$ is the diagonal matrix of sample standard deviations, and its eigenvalue decomposition: $$ \hat{S} = \hat{Q} \hat{D} \hat{Q}^t $$ The resulting principal components $\hat{Z} = X \cdot \hat{Q}$ are different, but they span the same vector space, so there exists a matrix $T$ such that $Z = \hat{Z} \cdot T$ (indeed $T = \hat{Q}^t Q$, since $\hat{Q}$ is orthogonal).
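A small NumPy sketch checking this relation numerically (my own setup; both sets of components are formed from the same centered $X$, as in the question):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.normal(size=(n, p)) * np.array([1.0, 10.0, 0.1, 5.0])  # unequal scales
X = X - X.mean(axis=0)                       # centered data

S = X.T @ X / (n - 1)                        # sample covariance
sigma_inv = np.diag(1.0 / np.sqrt(np.diag(S)))
S_hat = sigma_inv @ S @ sigma_inv            # sample correlation matrix

D, Q = np.linalg.eigh(S)                     # S = Q diag(D) Q^t
D_hat, Q_hat = np.linalg.eigh(S_hat)         # S_hat = Q_hat diag(D_hat) Q_hat^t

Z = X @ Q                                    # covariance-based components
Z_hat = X @ Q_hat                            # correlation-based components
T = Q_hat.T @ Q                              # change of basis: Z = Z_hat @ T
```

Since $\hat{Q}$ is orthogonal, $\hat{Z} T = X \hat{Q} \hat{Q}^t Q = X Q = Z$ exactly.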
For reasons of numerical stability, it is preferred to work with the correlation matrix.
I was wondering whether it is possible to use $\sigma$, $\hat{D}$ and $\hat{Q}$ to compute $Q$ directly, avoiding a second eigenvalue decomposition of $S$ reconstructed from $\sigma$, $\hat{D}$ and $\hat{Q}$?