What is the true meaning of using the SVD to find the null space? I mean, why, when many papers mention the null space, do they compute the SVD? What do we learn after computing the SVD?
For example: let $\mathbf G \in \mathbb{R}^{R_r \times R_t}$ and $\mathbf W \in \mathbb{R}^{R_t \times R_r}$. Now I want $\mathbf W \mathbf G \mathbf W = 0$, and I know that if $R_r > R_t$, the (left) null space of $\mathbf G$ has dimension $R_r - R_t$. So why should one still apply the SVD to solve $\mathbf W \mathbf G = 0$?
This is an explanation of Arthur's comment: a matrix has one zero singular value for each dimension of its null space.
Consider a matrix $A\in \mathbb{R}^{m\times n}$ and assume we have its SVD: $$ A = U \Sigma V^T = [u_1\; \cdots \;u_m] \, \Sigma\, \begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}. $$ Note that $\{v_1,\ldots, v_n\}$ forms an orthonormal basis of $\mathbb{R}^n$. We have $$ A v_i = U \Sigma V^T v_i = U \Sigma e_i = U (\sigma_i e_i) = \sigma_i u_i, $$ where we set $\sigma_i = 0$ for $i > \min(m,n)$. This equation gives the following: for every zero singular value $\sigma_i$, we get $Av_i = 0$ and thus $v_i \in \ker A$. On the other hand, for every non-zero singular value, $A v_i = \sigma_i u_i$ is not the zero vector, so $v_i \notin \ker A$. Since the $v_i$'s form an orthonormal basis of $\mathbb{R}^n$, the number of zero singular values is equal to the dimension of the null space $\ker A$, and the corresponding $v_i$'s form an orthonormal basis of $\ker A$.
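This is also exactly how the null space is computed numerically. A minimal sketch with NumPy (the $4\times 3$ matrix below is my own example, constructed with rank $2$ so that its null space has dimension $3 - 2 = 1$): compute the SVD, treat singular values below a tolerance as zero, and take the corresponding rows of $V^T$ as a basis of $\ker A$.

```python
import numpy as np

# Example matrix with a known null space: the third column is the sum of
# the first two, so rank(A) = 2 and dim(ker A) = 3 - 2 = 1.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])

# SVD: A = U @ diag(s) @ Vt, with the rows of Vt being the v_i^T above.
U, s, Vt = np.linalg.svd(A)

# One sigma_i per basis vector v_i: pad with zeros in case n > min(m, n).
sigma = np.zeros(A.shape[1])
sigma[:len(s)] = s

# Numerical rank decision: singular values below this tolerance count as zero.
tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]

# Null-space basis: the columns v_i whose singular value is (numerically) zero.
null_basis = Vt[sigma <= tol].T

print(null_basis.shape)                 # (3, 1): one basis vector
print(np.linalg.norm(A @ null_basis))   # essentially 0
```

The tolerance matters because floating-point arithmetic almost never produces an exactly zero singular value; scaling by the largest singular value and the machine epsilon is a common heuristic for deciding which $\sigma_i$ are "zero".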