Here is a problem I was looking at. Let the production of steel, coal and electric power be $P_s, P_c$ and $P_e$ respectively. Their output (by column) and consumption (by row) are given by:
$$ \begin{array}{|l|c|c|c|} \hline & Steel & Coal & Electricity \\ \hline Steel & 0.1 & 0.1 & 0.5 \\ Coal & 0.3 & 0.2 & 0.4 \\ Electricity & 0.6 & 0.7 & 0.1 \\ \hline \end{array} $$
To find the equilibrium solution, the following system of equations is formed:
$$ \underbrace{\begin{pmatrix} -0.9 & 0.1 & 0.5 \\ 0.3 & -0.8 & 0.4 \\ 0.6 & 0.7 & -0.9 \end{pmatrix}}_A \begin{pmatrix} P_s \\ P_c \\ P_e \end{pmatrix} = \underbrace{\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}}_b $$
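Every column of the consumption table sums to $1$ (a closed model), so $A$, which is the table minus the identity, is singular. A quick check, sketched in NumPy (not part of the original post):

```python
import numpy as np

# Consumption table from the question: entry (i, j) is the fraction of
# sector j's output consumed by sector i.
C = np.array([[0.1, 0.1, 0.5],
              [0.3, 0.2, 0.4],
              [0.6, 0.7, 0.1]])

# Each column sums to 1, so A = C - I is singular (rank 2, det ~ 0).
A = C - np.eye(3)
print(C.sum(axis=0))             # [1. 1. 1.]
print(np.linalg.matrix_rank(A))  # 2
```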
Neither $P=A^{-1}b$ nor $P=A^+b$ works, so the SVD is brought in to produce a solution. To compute $A=U\Sigma V^T$, the following code is used:
A = [-0.9  0.1  0.5;
      0.3 -0.8  0.4;
      0.6  0.7 -0.9];
sigma1 = A*A';   % eigenvectors of A*A' give U
sigma2 = A'*A;   % eigenvectors of A'*A give V
U = eye(3); V = eye(3);
for i = 1:15     % unshifted QR iteration
    [Q1,R1] = qr(sigma1);
    [Q2,R2] = qr(sigma2);
    U = U * Q1; sigma1 = R1 * Q1;
    V = V * Q2; sigma2 = R2 * Q2;
end
S = diag(diag(sigma1)); S = sqrt(S);   % singular values = sqrt of eigenvalues
U = round(U,5); Sigma = round(S,5); V = round(V,5);
With these $U,\Sigma$ and $V$ I get $U\Sigma V^T=-A$. To get the correct result I have to multiply $V$ or $U$ by $-1$. With 15 iterations, $U,\Sigma$ and $V$ are:
$$ \underbrace{\begin{pmatrix} -0.4852 & 0.6567 & 0.5774 \\ -0.3262 & -0.7485 & 0.5774 \\ 0.8113 & 0.0918 & 0.5774 \end{pmatrix}}_U\: \underbrace{\begin{pmatrix} 1.5836 & & \\ & 1.0547 & \\ & & 0 \end{pmatrix}}_{\Sigma}\: \underbrace{\begin{pmatrix} -0.5214 & 0.7211 & 0.4563 \\ -0.4928 & -0.6910 & 0.5289 \\ 0.6967 & 0.0509 & 0.7156 \end{pmatrix}}_V $$ If I change `for i = 1:15` to `for i = 1:115` and display U, Sigma and V in the command window, I get the same thing, but when I multiply $U\Sigma V^T$ I get:
$$ \underbrace{\begin{pmatrix} 0.9000 & -0.1000 & -0.5000 \\ -0.3000 & 0.8000 & -0.4000 \\ -0.6000 & -0.7000 & 0.9000 \end{pmatrix}}_{\texttt{for i = 1:15}}\quad\quad \underbrace{\begin{pmatrix} -0.0989 & 0.8572 & -0.5705 \\ 0.8385 & -0.2910 & -0.3197 \\ -0.7396 & -0.5662 & 0.8901 \end{pmatrix}}_{\texttt{for i = 1:115}} $$
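For reference, the same iteration in NumPy (a sketch: `np.linalg.qr` in place of MATLAB's `qr`; the two may differ in sign conventions) recovers the singular values quoted above:

```python
import numpy as np

A = np.array([[-0.9,  0.1,  0.5],
              [ 0.3, -0.8,  0.4],
              [ 0.6,  0.7, -0.9]])

# Unshifted QR iteration on A*A' and A'*A, as in the MATLAB code.
s1, s2 = A @ A.T, A.T @ A
U, V = np.eye(3), np.eye(3)
for _ in range(15):
    Q1, R1 = np.linalg.qr(s1)
    Q2, R2 = np.linalg.qr(s2)
    U, s1 = U @ Q1, R1 @ Q1
    V, s2 = V @ Q2, R2 @ Q2

# Eigenvalues collect on the diagonal; singular values are their square
# roots (clip guards against tiny negative round-off at the zero eigenvalue).
sigma = np.sqrt(np.clip(np.diag(s1), 0, None))
print(np.sort(sigma)[::-1])   # close to [1.5836, 1.0547, 0]
```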
Q1: What is the problem?
To find a solution, the last column of $V$ is chosen and scaled so that its last entry equals $1$: $$ P= \frac{1}{0.7156} \begin{pmatrix} 0.4563 \\ 0.5289 \\ 0.7156 \end{pmatrix} = \begin{pmatrix} 0.6377 \\ 0.7391 \\ 1.0000 \end{pmatrix} $$
Q2: Why choose the last column of $V$?
Q3: Can I get the same solution using $U$?
Q4: It is not unique and it is not a least-squares solution, so what is it?
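The candidate $P$ from the last column of $V$ can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[-0.9,  0.1,  0.5],
              [ 0.3, -0.8,  0.4],
              [ 0.6,  0.7, -0.9]])

# Last column of V, scaled so that P_e = 1.
P = np.array([0.4563, 0.5289, 0.7156]) / 0.7156
print(A @ P)   # close to the zero vector: A*P = 0 up to rounding

# The same direction is the right singular vector for sigma = 0:
_, s, Vh = np.linalg.svd(A)
v3 = Vh[-1]            # row of Vh = column of V
print(v3 / v3[-1])     # close to [0.6377, 0.7391, 1.0]
```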
A similar conundrum is discussed here: Query about the Moore Penrose pseudoinverse method. On to your questions.
Existence
Given the matrix $$ \mathbf{A} = \frac{1}{10} \left[ \begin{array}{rrr} -9 & 1 & 5 \\ 3 & -2 & 4 \\ 6 & 7 & -1 \\ \end{array} \right] $$ and the data vector $$ b = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ \end{array} \right] $$ what can we say about the linear system $$ \mathbf{A} x = b? $$
Using Gauss-Jordan elimination, $$ \begin{align} \left[ \begin{array}{c} \mathbf{A} & \mathbf{I}_{3} \end{array} \right] % &\mapsto % \left[ \begin{array}{c} \mathbf{E_{A}} & \mathbf{R} \end{array} \right] \\ %% \left[ \begin{array}{rrr|ccc} -\frac{9}{10} & \frac{1}{10} & \frac{1}{2} & 1 & 0 & 0 \\ \frac{3}{10} & -\frac{1}{5} & \frac{2}{5} & 0 & 1 & 0 \\ \frac{3}{5} & \frac{7}{10} & -\frac{1}{10} & 0 & 0 & 1 \\ \end{array} \right] % &\mapsto % \left[ \begin{array}{ccc|rrr} 1 & 0 & 0 & -\frac{130}{213} & \frac{60}{71} & \frac{70}{213} \\ 0 & 1 & 0 & \frac{45}{71} & -\frac{35}{71} & \frac{85}{71} \\ 0 & 0 & 1 & \frac{55}{71} & \frac{115}{71} & \frac{25}{71} \\ \end{array} \right] \tag{1} % \end{align} $$
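The right-hand block of the tableau in (1) is $\mathbf{A}^{-1}$; a quick numerical check of the elimination, sketched in NumPy:

```python
import numpy as np

A = np.array([[-9,  1,  5],
              [ 3, -2,  4],
              [ 6,  7, -1]]) / 10

# Right-hand block of the Gauss-Jordan tableau (1):
Ainv = np.array([[-130/213,  60/71,  70/213],
                 [  45/71,  -35/71,  85/71 ],
                 [  55/71,  115/71,  25/71 ]])

print(np.allclose(A @ Ainv, np.eye(3)))     # True
print(np.allclose(Ainv, np.linalg.inv(A)))  # True
```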
Conclusion: All columns are linearly independent; the matrix has full rank. The nullspace is trivial: $$\mathcal{N}\left( \mathbf{A} \right)=\{\mathbf{0}\}$$
A nontrivial solution to the homogeneous equation does not exist. Solving the equation $\mathbf{A}x=\mathbf{0}$ is tantamount to asking for a vector in the nullspace. There are no such nontrivial vectors.
The existence drum is beaten often and loudly here on MSE:
Exact solution of overdetermined linear system
Existence of least squares solution to Ax=b
Unique least square solutions
Is a least squares solution to $Ax=b$ necessarily unique
Let $A$ be an 8 x 5 matrix of rank 3, and let $b$ be a nonzero vector in $N(A^{T})$. Show $Ax=b$ must be inconsistent.
null space of a matrix for a matrix with a 0 column
Why does $A^{T}Ax=A^{T}b$ have infinitely many solution algebraically when $A$ has dependent columns?
Will 2 linear equations with 2 unknowns always have a solution?
When does the least squares solution exist?
SVD
$$ \mathbf{A} = \mathbf{U} \, \mathbf{S} \, \mathbf{V}^{*} \tag{2} $$
To clear up confusion about building the SVD, the steps are detailed below.
1 Product matrix
$$ \mathbf{A}^{*} \mathbf{A} = \left[ \begin{array}{rrr} 1.26 & 0.27 & -0.39 \\ 0.27 & 0.54 & -0.10 \\ -0.39 & -0.10 & 0.42 \\ \end{array} \right] $$
2 Eigensystem decomposition
Eigenvalues: $$ \lambda \left( \mathbf{A}^{*}\mathbf{A} \right) = \left\{ 1.49952 , 0.453789 , 0.266694 \right\} $$ The singular values are their square roots: $$ \sigma_{i} = \sqrt{\lambda_{i} \left( \mathbf{A}^{*}\mathbf{A} \right)} $$ Construct the $\mathbf{S}$ matrix: $$ \mathbf{S} = \left[ \begin{array}{ccc} 1.22455 & 0 & 0 \\ 0 & 0.673639 & 0 \\ 0 & 0 & 0.516425 \\ \end{array} \right] $$
Eigenvectors: $$ v_{1} = \left[ \begin{array}{l} -0.892021 \\ -0.287368 \\ \phantom{-}0.348883 \\ \end{array} \right], \quad % v_{2} = \left[ \begin{array}{l} \phantom{-} 0.256905 \\ -0.957414 \\ -0.131749 \end{array} \right], \quad % v_{3} = \left[ \begin{array}{l} \phantom{-} 0.371886 \\ -0.0278925 \\ \phantom{-} 0.927859 \end{array} \right] % $$ Construct the matrix $\mathbf{V}$, an orthonormal span for the row space: $$ \mathbf{V} = \left[ \begin{array}{ccc} \frac{v_{1}}{\lVert v_{1} \rVert} & \frac{v_{2}}{\lVert v_{2} \rVert} & \frac{v_{3}}{\lVert v_{3} \rVert} \end{array} \right] = \left[ \begin{array}{lll} -0.892021 & \phantom{-}0.256905 & \phantom{-}0.371886 \\ -0.287368 & -0.957414 & -0.0278925 \\ \phantom{-}0.348883 & -0.131749 & \phantom{-}0.927859 \\ \end{array} \right] $$
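Steps 1 and 2 can be reproduced with a symmetric eigensolver; a NumPy sketch (`np.linalg.eigh` returns eigenvalues in ascending order, and each eigenvector is determined only up to sign):

```python
import numpy as np

A = np.array([[-9,  1,  5],
              [ 3, -2,  4],
              [ 6,  7, -1]]) / 10

# Eigensystem of A*A, reordered to descending eigenvalues.
lam, V = np.linalg.eigh(A.T @ A)
lam, V = lam[::-1], V[:, ::-1]

# Singular values are the square roots of the eigenvalues.
sigma = np.sqrt(lam)
print(sigma)   # close to [1.22455, 0.673639, 0.516425]
```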
3 Column space matrix
Construct the matrix $\mathbf{U}$, an orthonormal span for the column space: $$ \mathbf{U} = \mathbf{A} \mathbf{V} \mathbf{S}^{-1} = \left[ \begin{array}{llc} \phantom{-}0.774591 & -0.583147 & 0.244844 \\ -0.0576372 & \phantom{-}0.320431 & 0.945517 \\ -0.629831 & -0.746501 & 0.214592 \\ \end{array} \right] $$
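That $\mathbf{U}=\mathbf{A}\mathbf{V}\mathbf{S}^{-1}$ is orthogonal, and that the three factors reproduce $\mathbf{A}$, follows from $\mathbf{A}^{*}\mathbf{A}=\mathbf{V}\mathbf{S}^{2}\mathbf{V}^{*}$; a NumPy sketch:

```python
import numpy as np

A = np.array([[-9,  1,  5],
              [ 3, -2,  4],
              [ 6,  7, -1]]) / 10

# V and S from the eigensystem of A*A (descending order).
lam, V = np.linalg.eigh(A.T @ A)
lam, V = lam[::-1], V[:, ::-1]
S = np.diag(np.sqrt(lam))

# Step 3: column space matrix.
U = A @ V @ np.linalg.inv(S)
print(np.allclose(U.T @ U, np.eye(3)))   # True: U is orthogonal
print(np.allclose(U @ S @ V.T, A))       # True: U S V* = A
```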
Moore-Penrose Pseudoinverse
The pseudoinverse is easily assembled from the SVD matrices.
$$ \mathbf{A}^{+} = \mathbf{V} \, \mathbf{S}^{-1} \, \mathbf{U}^{*} = \left[ \begin{array}{llc} -0.610329 & \phantom{-}0.84507 & 0.328638 \\ \phantom{-}0.633803 & -0.492958 & 1.19718 \\ \phantom{-}0.774648 & \phantom{-}1.61972 & 0.352113 \\ \end{array} \right] \tag{3} $$
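The assembly of $\mathbf{A}^{+}$ can be checked against NumPy's built-in SVD and pseudoinverse (a sketch; since this $\mathbf{A}$ has full rank, $\mathbf{A}^{+}=\mathbf{A}^{-1}$):

```python
import numpy as np

A = np.array([[-9,  1,  5],
              [ 3, -2,  4],
              [ 6,  7, -1]]) / 10

# A+ = V S^{-1} U*, assembled from the SVD factors.
U, s, Vh = np.linalg.svd(A)
A_pinv = Vh.T @ np.diag(1 / s) @ U.T

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
print(np.allclose(A_pinv, np.linalg.inv(A)))   # True (full rank)
```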