In Hartley and Zisserman's book Multiple View Geometry in Computer Vision, when it comes to data normalization, it states:
Namely the points should be translated so that their centroid is at the origin, and scaled so that their RMS (root-mean-squared) distance from the origin is $\sqrt{2}$.
Assume I have a matrix in which the 2D image points are stored column by column, e.g.
$$\begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{matrix}$$
where the first row stores the $x$ coordinates and the second row stores the $y$ coordinates. So the centroid may be computed with
Eigen::Vector2d centroid = observations.rowwise().mean();
Now, if I have $n$ numbers from a 1D discrete distribution, the RMS deviation can simply be computed as the square root of the mean squared deviation.
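For reference, here is a minimal plain-C++ sketch of that 1D computation (the helper name `rms_deviation` is my own, not from any library):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// RMS deviation of 1D samples:
// sqrt of the mean squared deviation from the sample mean.
double rms_deviation(const std::vector<double>& xs) {
    double mean = 0.0;
    for (double x : xs) mean += x;
    mean /= xs.size();

    double sum_sq = 0.0;
    for (double x : xs) sum_sq += (x - mean) * (x - mean);
    return std::sqrt(sum_sq / xs.size());
}
```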
But I don't know HOW to compute the RMS distance between the 2D points and the origin, either mathematically or in code. Could someone shed some light on this?
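My best guess is that the 2D generalization is $d_{\text{RMS}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \lVert \mathbf{p}_i - \mathbf{c} \rVert^2}$, where $\mathbf{c}$ is the centroid, but I am not sure this is what the book means. A plain-C++ sketch of that guess (avoiding Eigen so the snippet is self-contained; `rms_distance` is my own helper name):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Guessed 2D generalization of the RMS deviation:
// d_RMS = sqrt( (1/n) * sum_i ||p_i - c||^2 ), with c the centroid.
// Each point is stored as {x, y}.
double rms_distance(const std::vector<std::array<double, 2>>& pts) {
    // Centroid, component-wise (what observations.rowwise().mean() gives in Eigen).
    double cx = 0.0, cy = 0.0;
    for (const auto& p : pts) { cx += p[0]; cy += p[1]; }
    cx /= pts.size();
    cy /= pts.size();

    // Mean of the squared Euclidean distances to the centroid, then sqrt.
    double sum_sq = 0.0;
    for (const auto& p : pts) {
        const double dx = p[0] - cx;
        const double dy = p[1] - cy;
        sum_sq += dx * dx + dy * dy;
    }
    return std::sqrt(sum_sq / pts.size());
}
```

If this is right, the normalizing scale would be $s = \sqrt{2} / d_{\text{RMS}}$, and in Eigen I would expect something like `std::sqrt((observations.colwise() - centroid).squaredNorm() / observations.cols())` to compute the same value, but I have not verified that.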