Alternative to the least-squares method


I'm writing MATLAB code in which I need to find the minimum of a function. To do that, I compute the distance between two points

$$ d = \sqrt{\sum \left[ (x_{ideal} - x_{real})^2 + (y_{ideal} - y_{real})^2 \right]} $$
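As a Python sketch (the question uses MATLAB, but the formula is the same), this distance could be computed as follows; the array arguments stand in for the sampled ideal and real curves:

```python
import numpy as np

def distance(x_ideal, y_ideal, x_real, y_real):
    """Square root of the summed squared differences in x and y."""
    return np.sqrt(np.sum((x_ideal - x_real) ** 2 + (y_ideal - y_real) ** 2))

# Single-point example: error (3, 4) gives distance 5.
d = distance(np.array([0.0]), np.array([0.0]), np.array([3.0]), np.array([4.0]))
```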

The problem is that, with the parameters I use, this distance does not have a single minimum. With my values of $x, y$ the distance is at its minimum over the whole interval between $0.9$ and $1.25$, and MATLAB simply returns the first minimum it encounters, while the correct one is supposed to be at $1$.

[Screenshot: plot of the distance versus the varying parameter]

On the $x$-axis I have the varying parameter (going from $0$ to $2$), while on the $y$-axis I plot the distance computed with the formula above. $x_{ideal}$ and $y_{ideal}$ are fixed, while $x_{real}, y_{real}$ change depending on the varying parameter.

To obtain a single minimum I would need to compute the distance in a way that gives a parabolic behaviour. Using $d^2$ makes no sense here, and I don't think that squaring the ideal and real $x, y$ values helps either. Is there another way to compute this distance instead of using the least-squares method?

I need a parabolic behaviour because then, with gradient-based optimization methods, I can find a unique solution; right now this distance function introduces a lot of error.

Interpolations are not an option.


1 Answer


The way you formulated the problem, you always end up with symmetric behavior around the point of interest: your distance is essentially the magnitude of the error in question.

For example, if your ideal $(x, y) = (0, 0)$, there is no difference between the solution $(1, 1)$ and $(-1, -1)$, or, for that matter, any $(x, y)$ on the same circle around the origin.
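This symmetry is easy to check numerically; the following is a small Python illustration (not the asker's code), using the ideal point $(0, 0)$:

```python
import numpy as np

def dist(x, y):
    """Distance from the ideal point (0, 0)."""
    return np.sqrt(x ** 2 + y ** 2)

# (1, 1) and (-1, -1) are indistinguishable to this distance,
# as is every other point at the same radius sqrt(2).
print(dist(1, 1), dist(-1, -1), dist(np.sqrt(2), 0))  # all identical
```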

You say the correct solution is $x = 1$ in your example, but what makes it the correct one? You must have other selection criteria not expressed in the problem.

My suggestion is to convert these unstated criteria into a penalty term, so that the optimizer can choose the optimum correctly. For example, add a small (but noticeable) penalty to the distance whenever the parameter is away from $1$.
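A minimal Python sketch of that idea (the weight `w` and the quadratic form of the penalty are assumptions, and `d` stands in for the original MATLAB distance function). A quadratic penalty has the advantage of staying smooth, which suits the gradient-based optimizers mentioned in the question:

```python
def penalized(d, p, w=1e-2):
    """Distance d plus a small quadratic penalty for moving the
    parameter p away from the preferred value 1. The weight w is an
    assumed tuning constant: large enough to break ties on a flat
    plateau, small enough not to move a well-defined minimum."""
    return d + w * (p - 1.0) ** 2

# On a flat plateau (same distance for p = 0.9, 1.0, and 1.25),
# the penalty makes p = 1 the unique minimizer.
plateau = {0.9: 0.5, 1.0: 0.5, 1.25: 0.5}
best = min(plateau, key=lambda p: penalized(plateau[p], p))
```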