I have around 1000 values for a GPS receiver position, as follows. All of these values represent the SAME POINT.

I want to find the error in these values. What I am doing right now is computing the Cartesian distance between each reading and the mean of all 1000 readings, then plotting those distances as a representation of the error. I don't think this is the right approach. What formulas or methods should I use to show the error in the readings?
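For concreteness, the current approach (distance of each reading to the mean position) can be sketched as follows; the readings here are synthetic placeholder data, since the actual values are not shown:

```python
import numpy as np

# Hypothetical readings: N x 2 array of planar positions for one fixed point
rng = np.random.default_rng(0)
readings = rng.normal(loc=[100.0, 200.0], scale=0.5, size=(1000, 2))

centroid = readings.mean(axis=0)
# Euclidean (Cartesian) distance of each reading to the mean position
distances = np.linalg.norm(readings - centroid, axis=1)
# `distances` holds one error value per reading, ready to plot
```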
Due to the significant truncation of the values, the quantization error dominates, so a Gaussian model does not really fit; there will be some bias. A multinomial distribution would be more rigorous (and harder to estimate).
Unless the OP has serious reasons to do so, I don't see any need to precisely model the distribution. On the contrary, as an empirical measure of the spread, the average distance to the centroid should be enough.
If in addition some insight into the anisotropy is sought, then use the covariance matrix of the deviations from the centroid. (Note that the trace of this covariance matrix is the average squared distance to the centroid.)
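Both quantities suggested above are a few lines of NumPy; a minimal sketch on synthetic data (the actual readings are not shown, so an anisotropic Gaussian cloud stands in for them). The trace identity is checked against the covariance of the deviations:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical anisotropic cloud: more spread east-west than north-south
readings = rng.normal(loc=[0.0, 0.0], scale=[2.0, 0.5], size=(1000, 2))

centroid = readings.mean(axis=0)
deviations = readings - centroid

# Scalar spread: average distance to the centroid
avg_dist = np.linalg.norm(deviations, axis=1).mean()

# Anisotropy: 2x2 covariance matrix of the deviations
# (ddof=0 so the trace identity below holds exactly)
cov = np.cov(deviations, rowvar=False, ddof=0)

# Trace of the covariance equals the mean squared distance to the centroid
mean_sq_dist = (np.linalg.norm(deviations, axis=1) ** 2).mean()
```

The eigenvectors of `cov` give the principal directions of the error ellipse, which is often what one actually wants to plot for GPS scatter.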