I am aware that there are many known methods for solving a nonlinear system $\mathbf{f}(\mathbf{x})=\mathbf{0}$ when $\mathbf{f}:\mathbb{R}^n\to\mathbb{R}^n$, assuming the Jacobian is non-singular. However, I am wondering what happens if $\mathbf{f}:\mathbb{R}^m\to\mathbb{R}^n$ with $m\neq n$. Of course, there can be infinitely many solutions, or no solutions at all, but what are good numerical methods for solving such a system?

I am aware that we can alter the Newton–Raphson iteration by replacing $\mathbf{J}_f(\mathbf{x}^k)^{-1}$ with its Moore–Penrose inverse. However, the following paper by Levin and Ben-Israel suggests generalizing this to arbitrary $\{2\}$-inverses. I am wondering what the numerical advantage of this further generalization is, compared to the Moore–Penrose inverse. I don't see why it would reduce the computation time, since the method computes an SVD of the Jacobian $\mathbf{J}_f(\mathbf{x}^k)$ anyway; so why not directly obtain the Moore–Penrose inverse, rather than construct the $\{2\}$-inverse $\Sigma^{(2)}$?
EDIT: I think the construction of $\Sigma^{(2)}$ is beneficial when some singular values are small, since it prevents enormous steps from appearing in the iteration. If none of the singular values are very small, the $\Sigma^{(2)}$ matrix coincides with the Moore–Penrose inverse.
A NEWTON METHOD FOR SYSTEMS OF m EQUATIONS IN n VARIABLES
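To make the comparison concrete, here is a minimal sketch of the pseudo-inverse Newton iteration $\mathbf{x}^{k+1}=\mathbf{x}^k-\mathbf{J}_f(\mathbf{x}^k)^{+}\mathbf{f}(\mathbf{x}^k)$ for $m\neq n$, where singular values below a cutoff are dropped when inverting $\Sigma$ (keeping them with a bounded reciprocal instead would give a $\{2\}$-inverse in the spirit of the paper). The functions, starting point, and cutoff are my own illustrative choices, not taken from the paper:

```python
import numpy as np

def pinv_newton(f, jac, x0, tol=1e-10, max_iter=50, sv_cutoff=1e-8):
    """Newton iteration x_{k+1} = x_k - J^+ f(x_k) for f: R^m -> R^n,
    with singular values below sv_cutoff discarded when forming J^+."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        U, s, Vt = np.linalg.svd(jac(x), full_matrices=False)
        # Invert only the "safe" singular values; small ones are zeroed.
        s_inv = np.where(s > sv_cutoff, 1.0 / np.maximum(s, sv_cutoff), 0.0)
        x = x - Vt.T @ (s_inv * (U.T @ fx))  # minimum-norm Newton step
    return x

# Hypothetical underdetermined example: one equation, two unknowns.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]]])
root = pinv_newton(f, jac, np.array([2.0, 1.0]))
```

Because the step is the minimum-norm one, the iteration picks out one particular solution from the infinite solution set, depending on the starting point.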
Another paper by the same authors suggests an inverse-free method for solving the system, the directional Newton method. What advantage could this method have over the one described above, which uses pseudo-inverses?
AN INVERSE-FREE DIRECTIONAL NEWTON METHOD FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS
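For reference, here is a sketch of the basic directional Newton step for a single equation $f:\mathbb{R}^n\to\mathbb{R}$, taking the gradient as the direction, so that $\mathbf{x}^{k+1}=\mathbf{x}^k-\frac{f(\mathbf{x}^k)}{\|\nabla f(\mathbf{x}^k)\|^2}\nabla f(\mathbf{x}^k)$. The system method in the paper is more involved, but it is built from inverse-free steps of this kind; the example problem below is my own, not from the paper:

```python
import numpy as np

def directional_newton(f, grad, x0, tol=1e-12, max_iter=100):
    """Directional Newton step for one equation f: R^n -> R, with the
    gradient as the search direction: x <- x - f(x) d / (d . d)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = grad(x)
        x = x - (fx / np.dot(d, d)) * d  # no matrix inverse needed
    return x

# Hypothetical example: find a point on the unit circle.
f = lambda x: x[0]**2 + x[1]**2 - 1.0
grad = lambda x: np.array([2 * x[0], 2 * x[1]])
root = directional_newton(f, grad, np.array([2.0, 1.0]))
```

Note that no linear system is solved and no SVD is computed per step, which is where the "inverse-free" saving comes from.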
As you might understand, I am getting a little lost among the different methods, and I am wondering if anyone could give me a good overview of which method is best applied in which situation. Thanks in advance!
The idea behind many approximate-inverse Newton-type algorithms (e.g. BFGS and the like) is the following: "Computing an SVD at every iteration is prohibitively expensive. Instead, let's just update our inverse Jacobian at every iteration using a Sherman-Morrison formula. This formula gives us the new approximate inverse on a silver platter, instead of requiring us to do an entire SVD computation."
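As a sketch of that idea, here is Broyden's (good) method for a square system, keeping an approximation $B\approx \mathbf{J}_f^{-1}$ that is updated rank-one via the Sherman–Morrison formula instead of being refactored each iteration. The test problem and starting point are hypothetical:

```python
import numpy as np

def broyden_inverse(f, jac, x0, tol=1e-10, max_iter=100):
    """Broyden's method: maintain B ~ J(x)^{-1}, updating it with the
    Sherman-Morrison rank-one formula instead of recomputing J^{-1}."""
    x = np.asarray(x0, dtype=float)
    B = np.linalg.inv(jac(x))  # one exact inverse to initialize
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = -B @ fx
        x_new = x + dx
        f_new = f(x_new)
        df = f_new - fx
        Bdf = B @ df
        denom = dx @ Bdf
        if abs(denom) > 1e-14:
            # Sherman-Morrison update: B += (dx - B df)(dx^T B) / (dx^T B df)
            B += np.outer(dx - Bdf, dx @ B) / denom
        x, fx = x_new, f_new
    return x

# Hypothetical 2x2 system: x^2 + y^2 = 2, x - y = 0, root near (1, 1).
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = broyden_inverse(f, jac, np.array([2.0, 1.0]))
```

Each iteration then costs only matrix-vector products and one outer product, rather than an $O(n^3)$ factorization or SVD.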
There are some nice advantages in this sort of scheme: