I have painstakingly ported the Python source "svd.py" to C++ and confirmed that it works for the example it comes with. While testing another example (this one, from Wikipedia), the assert statement trips because $m < n$.
I find this a somewhat curious constraint. It seems to fail mainly because the matrix $U$ (named $u$ in the source) is initialized as an $m \times n$ matrix, rather than the $m \times m$ matrix it is in the definition of an SVD. It is simple enough to initialize $U$ as an $m \times m$ matrix instead (padding with zeroes, say), but doing so breaks the code on some other examples.
Why would $U$ be an $m \times n$ matrix, and how can I fix the algorithm to work on all sizes of matrices?
EDIT: If the SVD is $M=U_1\Sigma_1 V_1^*$, I came up with the idea of computing the SVD of $M^*$ to get $M^*=V_1\Sigma_1^*U_1^*=U_2\Sigma_2 V_2^*$. Matching terms gives the desired factors: $U_1 = V_2$, $\Sigma_1 = \Sigma_2^*$, $V_1 = U_2$. This produces a result that multiplies back through correctly, but is it a proper SVD?
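The transpose trick from the EDIT can be sketched numerically. This is a minimal NumPy illustration (using `np.linalg.svd` as a stand-in for the ported routine; the function name `svd_via_transpose` is made up), decomposing $M^*$ when $m < n$ and reading the factors of $M$ back off:

```python
import numpy as np

def svd_via_transpose(M):
    """Sketch: SVD of a wide matrix (m < n) obtained by decomposing M^*."""
    # SVD of the conjugate transpose: M^* = U2 @ diag(S2) @ V2^*
    U2, S2, V2h = np.linalg.svd(M.conj().T, full_matrices=False)
    # Conjugating back: M = V2 @ diag(S2) @ U2^*,
    # so U1 = V2, Sigma1 = diag(S2), V1^* = U2^*.
    return V2h.conj().T, S2, U2.conj().T

# The m=4, n=5 example from Wikipedia's SVD article.
M = np.array([[1.0, 0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0, 0.0]])
U1, S1, V1h = svd_via_transpose(M)
assert np.allclose(U1 @ np.diag(S1) @ V1h, M)              # multiplies back to M
assert np.allclose(U1.conj().T @ U1, np.eye(U1.shape[1]))  # U1 has orthonormal columns
```

This is a proper (thin) SVD: the singular values of $M^*$ are exactly those of $M$, and the roles of the left and right singular vectors swap.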
It sounds like the routine is built for overdetermined systems ($m \ge n$). The standard theoretical trick is to work with $\mathbf{A}^{*}$ in lieu of $\mathbf{A}$. An underdetermined system must also contend with an infinite solution space.
Given a matrix of rank $\rho$,
$$ \mathbf{A} \in \mathbb{C}^{m\times n}_{\rho}, $$
the singular value decomposition is
$$
\begin{align}
\mathbf{A} &= \mathbf{U} \, \Sigma \, \mathbf{V}^{*} \\
&=
\left[ \begin{array}{cc}
 \color{blue}{\mathbf{U}_{\mathcal{R}}} & \color{red}{\mathbf{U}_{\mathcal{N}}}
\end{array} \right]
\left[ \begin{array}{cccccc}
 \sigma_{1} & 0 & \dots & & & 0 \\
 0 & \sigma_{2} \\
 \vdots & & \ddots \\
 & & & \sigma_{k} \\
 & & & & 0 \\
 0 & & & & & 0
\end{array} \right]
\left[ \begin{array}{c}
 \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \\
 \color{red}{\mathbf{V}_{\mathcal{N}}}^{*}
\end{array} \right] \\
&=
\left[ \begin{array}{cccccc}
 \color{blue}{u_{1}} & \dots & \color{blue}{u_{k}} & \color{red}{u_{k+1}} & \dots & \color{red}{u_{m}}
\end{array} \right]
\left[ \begin{array}{cc}
 \mathbf{S}_{k\times k} & \mathbf{0} \\
 \mathbf{0} & \mathbf{0}
\end{array} \right]
\left[ \begin{array}{c}
 \color{blue}{v_{1}^{*}} \\
 \vdots \\
 \color{blue}{v_{k}^{*}} \\
 \color{red}{v_{k+1}^{*}} \\
 \vdots \\
 \color{red}{v_{n}^{*}}
\end{array} \right]
\end{align}
$$
Here $\mathbf{U}$ is $m\times m$ and $\mathbf{V}$ is $n\times n$. Your routine likely assumes the full-column-rank case $\rho = n$, which forces $m \ge n$.
The least squares solution highlights the uniqueness issue. Given the solution vector $x\in\mathbb{C}^{n}$, the data vector $b \in \mathbb{C}^{m}$, and the restriction that $b\notin \color{red}{\mathcal{N}\left( \mathbf{A}^{*}\right)}$, the least squares solution to $\mathbf{A}x = b$ is $$ x_{LS} = \color{blue}{\mathbf{A}^{\dagger} b} + \color{red}{\left( \mathbf{I}_{n} - \mathbf{A}^{\dagger}\mathbf{A} \right) y}, \quad y\in\mathbb{C}^{n}. $$ If the nullspace $\color{red}{\mathcal{N}\left( \mathbf{A}\right)}$ is trivial (full column rank), then $\mathbf{A}^{\dagger}\mathbf{A} =\mathbf{I}_n$ and the solution is unique. In the underdetermined case $m < n$ the nullspace is nontrivial, so every choice of $y$ yields another solution.
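A minimal NumPy sketch of this solution family (the matrix `A` and vectors `b`, `y` below are made up for illustration; `np.linalg.pinv` plays the role of $\mathbf{A}^{\dagger}$):

```python
import numpy as np

# Hypothetical underdetermined system: A is 2x3, so N(A) is nontrivial.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

A_pinv = np.linalg.pinv(A)        # Moore-Penrose pseudoinverse A^dagger
x_min = A_pinv @ b                # minimum-norm least squares solution
P_null = np.eye(3) - A_pinv @ A   # projector onto N(A)

# Shifting by any vector projected into N(A) is still a solution.
y = np.array([5.0, -3.0, 7.0])
x_other = x_min + P_null @ y
assert np.allclose(A @ x_min, A @ x_other)  # both map to the same b
```

The projector term is exactly the $\left( \mathbf{I}_{n} - \mathbf{A}^{\dagger}\mathbf{A} \right) y$ piece above: it contributes nothing to $\mathbf{A}x$ but parametrizes the infinite solution space.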
So yes, we can build the Hermitian conjugate from the SVD components. Using the thin form, $$ \mathbf{A} = \color{blue}{\mathbf{U}_{\mathcal{R}}} \mathbf{S} \, \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \qquad \Rightarrow \qquad \mathbf{A}^{*} = \color{blue}{\mathbf{V}_{\mathcal{R}}} \mathbf{S} \, \color{blue}{\mathbf{U}_{\mathcal{R}}}^{*}, $$ since $\mathbf{S}$ is real and diagonal, so $\mathbf{S}^{*} = \mathbf{S}$.
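The identity can be checked directly. A short NumPy sketch on a random complex matrix (sizes chosen arbitrarily):

```python
import numpy as np

# Verify the thin-SVD identity A^* = V_R S U_R^* numerically.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

U, S, Vh = np.linalg.svd(A, full_matrices=False)  # thin form: A = U S V^*
A_star = Vh.conj().T @ np.diag(S) @ U.conj().T    # build V_R S U_R^*
assert np.allclose(A_star, A.conj().T)
```

In other words, the SVD of $\mathbf{A}^{*}$ is obtained from that of $\mathbf{A}$ by swapping the left and right singular vectors, which is why the transpose trick in the question produces a valid SVD.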