The book *Machine Learning: A Probabilistic Perspective* by Kevin Murphy states the following fact on page 130, without proof:
Consider the MLE estimate of the covariance matrix, $\Sigma_{\text{MLE}}$. The Bayesian shrinkage estimate is given by $$ \Sigma_{\text{MAP}}(i,j) = \begin{cases} \Sigma_{\text{MLE}}(i,j)\quad &\text{if}\ i=j, \\ (1-\lambda)\Sigma_{\text{MLE}}(i,j) &\text{otherwise.} \end{cases} $$ This will change the eigenvalues but will not affect the eigenvectors. This clearly seems false: for $\lambda = 1$, $\Sigma_{\text{MAP}}$ is a diagonal matrix, whose eigenvectors are the columns of the identity matrix. Has the book got this fact wrong?
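A quick numerical check supports the objection even for $\lambda < 1$. Below is a minimal sketch using an arbitrary $2\times 2$ matrix of my own choosing (not an example from the book), applying the shrinkage formula above and comparing eigenvectors:

```python
import numpy as np

# Arbitrary illustrative 2x2 "MLE" covariance estimate (not from the book)
Sigma_mle = np.array([[2.0, 0.8],
                      [0.8, 1.0]])

lam = 0.5  # shrinkage parameter, strictly between 0 and 1

# Apply the MAP shrinkage: keep the diagonal, scale off-diagonals by (1 - lam)
Sigma_map = Sigma_mle.copy()
off_diag = ~np.eye(2, dtype=bool)
Sigma_map[off_diag] *= (1 - lam)

# Eigenvectors of the two matrices (eigh sorts eigenvalues ascending,
# so the columns are comparable; abs() removes the sign ambiguity)
_, V_mle = np.linalg.eigh(Sigma_mle)
_, V_map = np.linalg.eigh(Sigma_map)

print(np.allclose(np.abs(V_mle), np.abs(V_map)))  # False: eigenvectors differ
```

So at least for this matrix, shrinking the off-diagonal entries rotates the eigenvectors, not just the eigenvalues.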