I have seen this formula in multiple domains, e.g., machine learning. I would like to understand what it is doing exactly, and what the effect of this function is. How does it do better than other methods? I don't hold a math degree, but I do want to understand why.
In machine learning, I could not recall where it is used (for error calculation?):
$\sqrt{x_1^2 + x_2^2 + \ldots + x_N^2}$
Given a point in a plane that is $x_1$ meters to the north of you and $x_2$ meters to the east, the distance to it is calculated using the Pythagorean theorem: $\sqrt{x_1^2 + x_2^2}$.
This can be extended to higher dimensions by adding more squares under the root. The result is called the Euclidean length (or norm) of a vector $x$.
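As a small illustration (a sketch, not from the original post), the formula above is just "square every component, add them up, take the root", in any number of dimensions:

```python
import math

def euclidean_length(x):
    """Euclidean length (2-norm): sqrt(x_1^2 + x_2^2 + ... + x_N^2)."""
    return math.sqrt(sum(xi * xi for xi in x))

# Two dimensions: exactly the Pythagorean theorem
print(euclidean_length([3.0, 4.0]))       # 5.0

# Higher dimensions: just more squares under the root
print(euclidean_length([1.0, 2.0, 2.0]))  # 3.0
```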
Usually, the Euclidean length of the error is used to define the accuracy. Meaning: you have a known solution $s$ and an approximation $x$, and you define your error as $e = \|s-x\|_2$, since that is an easy and consistent way to map a multidimensional vector to a single positive number. This error $e$ can then be used as the objective function in a minimization process to determine the parameters that helped you obtain $x$ in the first place.
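To make that concrete, here is a toy sketch (my own hypothetical example, with a made-up one-parameter model $x(a) = a \cdot t$): the known solution is $s$, and we pick the parameter $a$ by minimizing the Euclidean error $e = \|s - x(a)\|_2$ with a crude parameter scan.

```python
import math

s = [2.0, 4.0, 6.0]  # known solution
t = [1.0, 2.0, 3.0]  # inputs the approximation is built from

def error(a):
    """Euclidean error e = ||s - x(a)||_2 for the model x(a) = a * t."""
    x = [a * ti for ti in t]
    return math.sqrt(sum((si - xi) ** 2 for si, xi in zip(s, x)))

# Crude minimization: scan candidate parameters, keep the one with least error.
best_a = min((k / 100 for k in range(0, 501)), key=error)
print(best_a, error(best_a))  # a = 2.0 gives error 0.0
```

Real machine-learning code replaces the scan with gradient-based optimization, but the principle of "minimize the norm of the error over the parameters" is the same.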
(I think that is what is done in machine learning, but I'm no expert in that field.)