How is SVD better than Gaussian elimination in finding the rank of a matrix?


In *Linear Algebra and Its Applications*, Gilbert Strang, $4^{th}$ ed., one of the applications of the SVD mentioned is finding the effective rank of a matrix.

The idea presented in the book is that the rank of a matrix can be found via Gaussian elimination by counting the non-zero pivots. However, small pivots $\epsilon$ can be misleading, since it is ambiguous whether they are truly non-zero or actually $0$ up to numerical error.

A more stable measure of the rank is the effective rank, which is determined from the singular values of $A$, i.e. the square roots of the eigenvalues of $A^TA$ or $AA^T$. These singular values are less misleading. Based on the accuracy of the data, a tolerance such as $10^{-6}$ is chosen and all singular values above it are counted.

My question is that a similar strategy could be followed for Gaussian elimination as well: decide on a tolerance, say $10^{-3}$, and count the pivots above it. So why are singular values a better or more stable indicator of rank?
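To make the question concrete, here is a small numpy sketch (my own example, not from Strang's book) using a classic nearly rank-deficient triangular matrix: every pivot is exactly $1$, so pivot counting with any tolerance reports full rank, yet the smallest singular value is astronomically small, so the SVD reports numerical rank $n-1$.

```python
import numpy as np

n = 60
# Upper triangular matrix with 1 on the diagonal and -1 above it
# (a well-known example of a badly conditioned matrix).
A = np.eye(n) + np.triu(-np.ones((n, n)), k=1)

# A is already upper triangular, so Gaussian elimination needs no work:
# the pivots are the diagonal entries, all exactly 1.
pivots = np.abs(np.diag(A))
rank_pivots = int(np.sum(pivots > 1e-6))   # reports full rank n

# The SVD tells a different story: the smallest singular value is
# roughly 2^{-(n-2)}, i.e. A is extremely close to a rank n-1 matrix.
s = np.linalg.svd(A, compute_uv=False)
rank_svd = int(np.sum(s > 1e-6))           # reports rank n-1

print("pivot count:", rank_pivots)
print("SVD rank:   ", rank_svd)
print("smallest singular value:", s[-1])
```

No pivot-based tolerance can detect this, because the pivots themselves are all of moderate size; the near rank deficiency only shows up in a combination of rows, which the singular values capture.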