Solution of overdetermined polynomial system


Some of you will find this question pretty straightforward to answer, but I desperately need help solving a problem with several equations and two unknowns, for an engineering application.

I have a system of at least 2 equations:

$$(x+\Delta x_1)^2 + (y+\Delta y_1)^2 = \frac{A_1}{\Omega^2}$$ $$(x+\Delta x_2)^2 + (y+\Delta y_2)^2 = \frac{A_2}{\Omega^2}$$ $$(x+\Delta x_3)^2 + (y+\Delta y_3)^2 = \frac{A_3}{\Omega^2}$$ $$\vdots $$ $$(x+\Delta x_n)^2 + (y+\Delta y_n)^2 = \frac{A_n}{\Omega^2}$$

The unknowns are $x$ and $y$; everything else is known. The more equations I have, the more accurate my solution will be, I suppose. $A_i$ is a measured value, so strictly speaking it should be written $A_i+\epsilon_i$.

I would like to solve this numerically but I don't even know where to start :( Any help would be very useful!

Thanks!


There is 1 best solution below

BEST ANSWER

Such overdetermined systems are usually solved by the least-squares method. It works as follows: you form the sum of the squared residuals $(\mathrm{LHS} - \mathrm{RHS})$ of all equations. This sum is a function of $x$ and $y$, and you minimize it. You express the minimum condition by setting the gradient to zero and solve for $x$ and $y$.
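Spelling this out for your system: with residuals $r_i(x,y) = (x+\Delta x_i)^2 + (y+\Delta y_i)^2 - \dfrac{A_i}{\Omega^2}$, the function to minimize is

$$F(x,y)=\sum_{i=1}^{n} r_i(x,y)^2,$$

and setting the gradient to zero gives the two conditions

$$\frac{\partial F}{\partial x}=\sum_{i=1}^{n} 4\,r_i(x,y)\,(x+\Delta x_i)=0,\qquad \frac{\partial F}{\partial y}=\sum_{i=1}^{n} 4\,r_i(x,y)\,(y+\Delta y_i)=0.$$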

In your case, this will lead to a nonlinear system of two equations in two unknowns that you normally solve iteratively, for instance with Newton's method; applied to a sum of squares with damping, this idea leads to the well-known Levenberg-Marquardt algorithm.

In your case, there is also an easy linear approach: subtracting the first equation from all the others, you get $n-1$ equations. As the $x^2$ and $y^2$ terms cancel out, those $n-1$ equations are linear in $x$ and $y$. Then the least-squares method leads to a trivial system of $2$ linear equations in $2$ unknowns.
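As a sketch (the helper name `solve_linearized` and the NumPy usage are mine, not a standard routine), the linearized step could look like:

```python
import numpy as np

def solve_linearized(dx, dy, A, omega):
    """Subtract the first equation from the others and solve the
    resulting n-1 linear equations for (x, y) by least squares.

    Subtracting cancels x^2 and y^2, leaving for each i > 1:
        2*(dx_i - dx_1)*x + 2*(dy_i - dy_1)*y
            = (A_i - A_1)/omega^2 - (dx_i^2 - dx_1^2) - (dy_i^2 - dy_1^2)
    """
    dx, dy, A = np.asarray(dx, float), np.asarray(dy, float), np.asarray(A, float)
    rhs = A / omega**2
    # Coefficient matrix (n-1 rows, 2 columns) and right-hand side
    M = np.column_stack([2 * (dx[1:] - dx[0]), 2 * (dy[1:] - dy[0])])
    c = (rhs[1:] - rhs[0]) - (dx[1:]**2 - dx[0]**2) - (dy[1:]**2 - dy[0]**2)
    sol, *_ = np.linalg.lstsq(M, c, rcond=None)
    return sol  # array([x, y])
```

With exact (noise-free) data the linearized system is consistent and this recovers $(x,y)$ exactly; with noisy $A_i$ it returns the linear least-squares estimate.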

You can opt to keep this solution as the final one, or use it as an initial approximation for the Levenberg-Marquardt iterations.
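A minimal Levenberg-Marquardt sketch for this refinement (the function name `refine_lm`, the damping schedule, and the iteration limits are my own choices, assuming NumPy) could be:

```python
import numpy as np

def refine_lm(x0, y0, dx, dy, A, omega, n_iter=50, lam=1e-3):
    """Levenberg-Marquardt refinement of an (x, y) estimate for the
    residuals r_i = (x+dx_i)^2 + (y+dy_i)^2 - A_i/omega^2."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    rhs = np.asarray(A, float) / omega**2
    p = np.array([x0, y0], float)
    for _ in range(n_iter):
        r = (p[0] + dx)**2 + (p[1] + dy)**2 - rhs        # residual vector
        J = np.column_stack([2 * (p[0] + dx), 2 * (p[1] + dy)])  # Jacobian
        # Damped normal equations: (J^T J + lam*I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        new_p = p + step
        new_r = (new_p[0] + dx)**2 + (new_p[1] + dy)**2 - rhs
        if new_r @ new_r < r @ r:
            p, lam = new_p, lam * 0.5   # accept the step, relax damping
        else:
            lam *= 10.0                 # reject the step, increase damping
        if np.abs(step).max() < 1e-12:
            break
    return p
```

A natural usage is `refine_lm(*solve_initial_guess, dx, dy, A, omega)`, i.e. feeding in the linear-approach estimate as the starting point.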

Beware that invalid data points (called outliers) can ruin the accuracy as the least-squares method is very sensitive to these. If this occurs, you may have to switch to the so-called robust fitting methods. One of the most popular is RANSAC.

In your case, you could implement it by taking the equations in triples: subtract the first of the triple from the other two and solve the resulting $2\times 2$ linear system. Then plug the $(x,y)$ solution into the remaining equations and evaluate the total residual (accumulate $|\mathrm{LHS}-\mathrm{RHS}|$ instead of $(\mathrm{LHS}-\mathrm{RHS})^2$). Keep the solution with the smallest residual. If $n$ is not too large, it is even thinkable to try all $n(n-1)(n-2)/6$ triples.
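The exhaustive-triples variant above can be sketched as follows (again a sketch, not a library routine; the name `solve_triples` and the scoring details are mine, and NumPy is assumed):

```python
import numpy as np
from itertools import combinations

def solve_triples(dx, dy, A, omega):
    """Try every triple of equations: subtract the first of the triple
    from the other two, solve the 2x2 linear system, and score the
    candidate (x, y) by the sum of absolute residuals over ALL
    equations.  Return the best-scoring candidate."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    rhs = np.asarray(A, float) / omega**2
    best, best_err = None, np.inf
    for i, j, k in combinations(range(len(rhs)), 3):
        M = np.array([[2 * (dx[j] - dx[i]), 2 * (dy[j] - dy[i])],
                      [2 * (dx[k] - dx[i]), 2 * (dy[k] - dy[i])]])
        c = np.array([rhs[j] - rhs[i] - (dx[j]**2 - dx[i]**2) - (dy[j]**2 - dy[i]**2),
                      rhs[k] - rhs[i] - (dx[k]**2 - dx[i]**2) - (dy[k]**2 - dy[i]**2)])
        try:
            x, y = np.linalg.solve(M, c)
        except np.linalg.LinAlgError:
            continue  # degenerate triple (e.g. collinear offsets)
        # Robust score: sum of |LHS - RHS| over all n equations
        err = np.abs((x + dx)**2 + (y + dy)**2 - rhs).sum()
        if err < best_err:
            best, best_err = np.array([x, y]), err
    return best
```

Because an outlier contaminates only the triples that contain it, the best-scoring candidate typically comes from an all-clean triple, which is the point of the scheme.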