Many error-minimization and approximation models operate on a "sum of squares" of the computed values (e.g., the residual sum of squares).
What is the purpose of squaring the error? Is it to prevent negative values? If so, why not just use absolute value?
The sum of squared errors is proportional to the variance, an important quantity in statistics with many convenient properties.
If your errors have a normal (Gaussian) distribution and the model is linear, minimizing the squared error gives you the "maximum likelihood estimate". Briefly, this is the statistically best fit, and likely the best you can do.
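To see why, here is a brief sketch: if the errors $y_i - \hat y_i$ are independent Gaussians with common variance $\sigma^2$, the log-likelihood of the data is

$$\log L = -\frac{n}{2}\log\!\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \hat y_i\right)^2,$$

so maximizing the likelihood is exactly the same as minimizing the sum of squared errors (the first term does not depend on the fit).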
Even if the model is nonlinear, the sum of squared errors is a smooth, easy-to-compute, easy-to-differentiate function, which makes it easy to optimize. The sum of absolute values is not smooth wherever any of the errors crosses zero, which makes optimization more difficult.
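A tiny sketch of why the non-smoothness matters for gradient-based optimizers (the function names here are just illustrative):

```python
def sq_grad(e):
    # Derivative of e**2 is 2*e: continuous everywhere,
    # and it shrinks smoothly toward 0 near the minimum.
    return 2 * e

def abs_grad(e):
    # Derivative of abs(e) is sign(e): it jumps from -1
    # to +1 at e = 0, so there is no smooth minimum.
    return 1 if e > 0 else -1 if e < 0 else 0

# Squared-error gradient changes gradually near zero error...
print(sq_grad(-0.5), sq_grad(0.5))    # -1.0 1.0
# ...while the absolute-error gradient flips abruptly.
print(abs_grad(-0.5), abs_grad(0.5))  # -1 1
```

A gradient-descent step sized by `sq_grad` naturally slows down as the error approaches zero; a step sized by `abs_grad` keeps the same magnitude and can oscillate around the minimum.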
One disadvantage of the sum of squared errors is that "outliers" with large errors can skew the results disproportionately: squaring a large error makes it much larger still. In that case, using the absolute error instead of the squared error can mitigate the problem, provided you can structure your optimizer to handle the non-smooth objective.
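A small illustration of the robustness difference, using the standard facts that the mean minimizes the sum of squared errors and the median minimizes the sum of absolute errors:

```python
from statistics import mean, median

# Without outliers, the two estimates agree.
data = [1.0, 2.0, 3.0, 4.0, 5.0]
print(mean(data), median(data))          # 3.0 3.0

# One large outlier drags the mean (least-squares estimate)
# far away, but barely moves the median (least-absolute-error
# estimate) -- the absolute-error fit is more robust.
data_out = data + [100.0]
print(mean(data_out), median(data_out))  # 19.166... 3.5
```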