Usefulness of absolute value in optimization algorithms


In a university course on Optimization Algorithms, the professor said that in every algorithm the objective (cost) function is defined as: $$f(\bar x)=\lvert x_0 - g(\bar x)\rvert^{2}$$ where $x_0$ is the actual (target) value the algorithm needs to reach, $\bar x$ is a candidate solution vector, $f$ is defined as $f:\Bbb R^n \to \Bbb R$, $g$ is defined the same way as $f$, and $g$ could be a simulator or some other function evaluated at the vector $\bar x$.
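To make the setup concrete, here is a minimal sketch of how such an objective might be evaluated; both `g` (a stand-in for the simulator) and the target `x0` are hypothetical choices, not anything from the course:

```python
# Hypothetical stand-in for the simulator/model g: here, a weighted
# sum of the candidate vector's components.
def g(x):
    weights = (0.5, 1.5)
    return sum(xi * w for xi, w in zip(x, weights))

# Objective as stated by the professor: squared absolute deviation
# between the target value x0 and the model output g(x).
def f(x, x0=10.0):
    return abs(x0 - g(x)) ** 2

# Example: g((2, 4)) = 2*0.5 + 4*1.5 = 7, deviation 3, so f = 9
print(f((2.0, 4.0)))  # → 9.0
```

An optimizer would then search for a vector $\bar x$ that drives `f` toward zero.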

My curiosity is about that absolute value.

I asked the professor about the usefulness of that absolute value, since squaring should make the difference non-negative anyway. He replied that it helps to reduce errors and that it's a matter of functional analysis (Calculus IV?), so I don't have the background to understand it.

My question is: is that right? Is it possible to understand, with simple mathematics, why that absolute value is useful, and why I should not simply write $$f(\bar x)=( x_0 - g(\bar x) )^{2}$$ ?
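My observation can at least be checked numerically: for real-valued deviations, the two definitions appear to coincide (this check covers only real numbers, which is exactly what I'm unsure is the whole story):

```python
# For real d, |d|**2 and d**2 should give the same value,
# since squaring already discards the sign.
deviations = (-3.5, -1.0, 0.0, 2.0, 7.25)

for d in deviations:
    assert abs(d) ** 2 == d ** 2

print("abs(d)**2 == d**2 for all real test values")
```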

I'm an Italian computer engineer studying in Italy, and my background is just Calculus I and II.

Thank you in advance.