This is my first post here, so I hope I'll word everything correctly. I am an amateur mathematician who does problems for fun.
I am tackling a system of nonlinear equations with errors in the input values. I linearized the system and iterated least squares, analogous to Newton's method for finding zeros.
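For concreteness, the iteration described above (linearize, solve a least-squares subproblem, repeat) is essentially Gauss-Newton. A minimal sketch, using a made-up two-equation system rather than my actual one:

```python
import numpy as np

def gauss_newton(r, J, x0, iters=20):
    """Iterate least squares on a linearization of r(x) = 0,
    analogous to Newton's method for finding zeros."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Linearize around x and solve the least-squares subproblem
        step, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + step
    return x

# Illustrative system: intersect the circle x^2 + y^2 = 4 with the line y = x
r = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-1.0, 1.0]])

sol = gauss_newton(r, J, np.array([1.0, 1.0]))
# converges to (sqrt(2), sqrt(2))
```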
The problem arises when I try to take the errors in the input into account: the problem becomes nonlinear once again. The error function is simply a sphere around the input point, but if I insert this directly into the equation, it leaves three of my variables being multiplied with each other in many places.
My question is: what is the standard way to tackle errors in the inputs when it comes to least squares?
Would this be a valid option?
Introduce a pre-LS step that takes the input and the current best solution from the main LS, and solves for the point on the sphere surrounding the input point that would give a better fit. Then use this point to modify the input in the main LS, doing so at each iteration. In other words, make it a two-step problem: at each iteration of the LS, find the error that would give the best fit and use it in that iteration.
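To make the two-step idea concrete, here is a toy sketch on a 1-D problem (fitting a slope `a` in `y = a*x` when `x` is also noisy; all names and the problem itself are just for illustration). Instead of a fixed-radius sphere, the inner step simply moves each noisy input to the nearest point consistent with the current model, and the outer step refits the parameter to the adjusted inputs:

```python
import numpy as np

# Synthetic data: true model y = 2x, with noise in both x and y
rng = np.random.default_rng(0)
true_a = 2.0
x_true = np.linspace(1.0, 5.0, 30)
x_obs = x_true + rng.normal(0.0, 0.05, x_true.size)
y_obs = true_a * x_true + rng.normal(0.0, 0.05, x_true.size)

# Start from an ordinary least-squares fit that ignores the input errors
a = np.sum(x_obs * y_obs) / np.sum(x_obs**2)

for _ in range(10):
    # Inner step: for the current slope, move each input to the point
    # on the line y = a*x closest to the observation (x_obs, y_obs)
    x_adj = (x_obs + a * y_obs) / (1.0 + a**2)
    # Outer step: refit the slope using the adjusted inputs
    a = np.sum(x_adj * y_obs) / np.sum(x_adj**2)
```

This alternation is block coordinate descent on the joint objective `sum((x_obs - x)^2 + (y_obs - a*x)^2)`, i.e. an orthogonal-distance fit; whether it matches the sphere-constrained variant described above is exactly the kind of thing I am unsure about.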
I hope I was clear enough in my wording; if not, please say so.