I have a nonlinear least squares optimisation problem. With one particular choice of initialisation, the Gauss-Newton method appears to converge in a single iteration for my problem. With a random initialisation, Gauss-Newton still appears to converge to the same solution, but it takes more iterations.
What does this say about my residual function?
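For reference, here is a minimal sketch of the kind of Gauss-Newton loop I'm running. The residual here is a toy affine stand-in, not my actual problem; with it, the method reproduces the one-step behaviour I'm describing (the step solves the normal equations exactly, so any starting point lands on the least-squares solution in one update).

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-8, max_iter=50):
    """Plain Gauss-Newton: x <- x - (J^T J)^{-1} J^T r.

    Returns the final iterate and the number of updates taken.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        g = J.T @ r  # gradient of 0.5 * ||r(x)||^2
        if np.linalg.norm(g) < tol:
            return x, k  # converged after k updates
        # Gauss-Newton step from the normal equations J^T J dx = -J^T r
        dx = np.linalg.solve(J.T @ J, -g)
        x = x + dx
    return x, max_iter

# Toy affine residual r(x) = A x - b as a stand-in for my real residual
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

x_star, iters = gauss_newton(lambda x: A @ x - b,
                             lambda x: A,
                             rng.standard_normal(3))
```

With this affine residual, `iters` comes out as 1 regardless of the starting point, which matches what I observe from my particular initialisation.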