Mean distance vs distance of means


I have a series of $(x, y)$ coordinate pairs representing observations of two fixed points, with random error in each observation. The error should average out over a large number of observations, and I want to calculate the mean distance between the two points.

Initially I calculated the length of the mean difference vector, i.e. I averaged $\Delta x$ and $\Delta y$ first: $$\sqrt{\mu(\Delta x)^2+\mu(\Delta y)^2}$$ where $\mu$ denotes the average over all observations and $\Delta x$, $\Delta y$ are the coordinate differences for one pair of observations.

Then I compared this with the mean of the individual distances: $$\mu\left(\sqrt{\Delta x^2+\Delta y^2}\right)$$

Intuitively I expected these two values to be the same, but they are not. Why do they differ, and which figure more accurately represents the distance between the two actual points? Can you point me to an article explaining the reasoning behind this?
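To make the discrepancy concrete, here is a small simulation sketch (the true points, noise level, and sample size are made-up values for illustration only). It computes both quantities for noisy observations of two fixed points a true distance of 5 apart:

```python
import math
import random

random.seed(0)

# Hypothetical setup: two fixed points a true distance of 5 apart,
# each observation of each point perturbed by independent Gaussian error.
ax, ay = 0.0, 0.0
bx, by = 3.0, 4.0      # true distance = 5
sigma = 2.0            # per-coordinate observation noise
n = 100_000            # number of observation pairs

dxs, dys, dists = [], [], []
for _ in range(n):
    dx = (bx + random.gauss(0, sigma)) - (ax + random.gauss(0, sigma))
    dy = (by + random.gauss(0, sigma)) - (ay + random.gauss(0, sigma))
    dxs.append(dx)
    dys.append(dy)
    dists.append(math.hypot(dx, dy))

# Distance of the means: average the components first, then take the length.
dist_of_means = math.hypot(sum(dxs) / n, sum(dys) / n)

# Mean of the distances: take each length first, then average.
mean_of_dists = sum(dists) / n

print(dist_of_means)   # close to the true distance 5
print(mean_of_dists)   # systematically larger than 5
```

In a run like this, the distance of the means lands near the true value, while the mean of the distances comes out noticeably larger: the norm is a convex function, so by Jensen's inequality averaging after taking lengths inflates the result, because noise perpendicular to the true separation can only ever add to each individual distance.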

Thanks, and apologies if my notation is poor or the question is not entirely clear.