I am trying to calculate the position of a point (POI) from GPS distance measurements that I get from a "blackbox" system. I don't know where the point is (for testing purposes I can of course pick a point by coordinates and feed it to the blackbox!), but I can query the distance from any point I define (by latitude/longitude) to the POI.
So obviously my first thought was to use three points, query the distances, "draw" three circles (radius defined by distance) and calculate the POI by intersecting these circles.
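For reference, if the readings were exact, that intersection could be computed directly. A minimal sketch, assuming a local planar (x, y) frame in kilometres (the point, station coordinates and function name are hypothetical): subtracting pairs of circle equations cancels the quadratic terms and leaves a 2x2 linear system.

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Intersect three circles in a local planar frame (km).
    Subtracting circle equations pairwise yields a linear system,
    solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the three centres are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# With exact radii to a hypothetical point (1, 2), it is recovered:
poi = (1.0, 2.0)
ps = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
rs = [math.dist(p, poi) for p in ps]
print(trilaterate(*ps, *rs))  # ≈ (1.0, 2.0)
```

With integer readings the circles generally do not meet in a single point, which is exactly where the problems below start.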
The following problems arose:
- distance measurements are only given as integers in kilometres (so for a distance of 850m I get a result of "1", 2812m results in a "3").
- even though the interval [0, 1000m] seems to always result in a "1", it's (even after a lot of testing) not clear to me how fractions of a kilometre are handled. Sometimes [x.5, (x+1).5] seems to map to x+1; sometimes [x.01, (x+1)] seems to map to x+1.
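To make the two hypotheses above concrete, here is a sketch of both quantisation rules (the function names are mine, not the blackbox's). Note that both rules are consistent with the observations so far (850m → "1", 2812m → "3"); they only disagree near the half-kilometre marks.

```python
import math

# Hypothesis A: round to nearest -> [x - 0.5, x + 0.5) maps to x
def round_half(d_km):
    return math.floor(d_km + 0.5)

# Hypothesis B: round up -> (x - 1, x] maps to x
def ceiling(d_km):
    return math.ceil(d_km)

for d in (0.85, 1.49, 1.51, 2.812):
    print(d, round_half(d), ceiling(d))
# 0.85 and 2.812 agree under both rules; 1.49 distinguishes them
```

Querying a few points at carefully chosen distances like 1.49 km from a known test POI would tell the two rules apart.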
Since I can't seem to resolve these uncertainties, I went with assuming that the measurements include some sort of stochastic error.
What did I try?
- Calculating/estimating the area of intersection of multiple (more than three) circles. The biggest problem is that for a reading of "3" the actual distance could be 3.12km, which can result in no intersection area at all. If I instead increase the radius of every circle with a reading of x to x+0.5, the accuracy of my solution drastically declines.
- A simple, iterative "search" using a kind of "error descent": I pick a valid starting point, step in 8 directions (Moore neighbourhood) and calculate the distance from each of these 8 points to the border of every circle. The point with the smallest sum of squared errors becomes the next starting point, and I linearly reduce the step size during the process. This always generates a feasible solution, but it's too often far from where I expect the point to be.
- A (pseudo-)random "trial-and-error" process: query the distance from several points and choose one point at random. Use its reading to pick a radius uniformly from the interval [dist-0.5, dist+0.5] and draw a circle around that point with this radius. Step along its border and, for each candidate, measure the distance to all other circles (those are built from the measured distances without any random distortion). Keep the border point with the smallest sum of squared distances to all other circles. Repeat this n times and return the mean of all candidates as the POI. That sometimes seems to be really effective (with lucky random radii), but can be far off (with some bad "luck"). Increasing n to very high values produces results similar to the "error descent".
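For what it's worth, here is how I understand the "error descent" variant, as a minimal sketch in a planar (x, y) frame in km (the anchor points, the hidden point and all names are hypothetical, and the true quantisation rule is assumed to be round-to-nearest):

```python
import math

def descent(anchors, readings, start, step=1.0, iters=200):
    """Moore-neighbourhood 'error descent': the error of a candidate
    point is the sum of squared distances to each circle's border;
    the step size shrinks linearly over the iterations."""
    def err(p):
        return sum((math.dist(p, a) - r) ** 2
                   for a, r in zip(anchors, readings))
    best = start
    for i in range(iters):
        s = step * (1 - i / iters)  # linearly shrinking step size
        nbrs = [(best[0] + dx * s, best[1] + dy * s)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        best = min(nbrs, key=err)  # includes the current point itself
    return best

# Hypothetical example: hidden point, readings rounded to integers
poi = (2.3, 1.7)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 7.0)]
readings = [round(math.dist(a, poi)) for a in anchors]
print(descent(anchors, readings, start=(5.0, 5.0)))
```

Since the current point is among its own neighbours, the error never increases; the residual offset from the true point comes purely from the quantised radii.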
The third method seems like a good starting point. The problem is that its computation time grows too much (and the results are not that great compared to the other ones).
Recap:
- I can measure the distance from any point on earth I choose to POI.
- The results are in kilometres and without any decimal place.
- I can take as many measurements as needed (but that slows the algorithm down, so fewer points are always preferred)
- I can (without any calculation) estimate the POI by finding a point that yields a distance of "1" kilometre - the error could still be >1km, though
- My best estimate at the moment brings that down to a maximum error of about 300m (which is already quite good, but sadly not good enough).
I'd be really thankful for any ideas/algorithms/papers to read/... that could point me in a new direction.
I am currently using the haversine formula to calculate distances. Could it be that the blackbox system uses something else that differs by this much?
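For context, this is the haversine computation I am using (a standard spherical great-circle formula; the coordinates in the example are arbitrary). Two things can make it disagree with another system: the Earth radius constant chosen (6371 km mean vs 6378.137 km equatorial, about 0.11%), and the spherical model itself, which can deviate from WGS-84 ellipsoidal distances by up to roughly 0.5% - either is enough to flip an integer reading near a boundary.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance on a sphere of the given radius (km)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(a))

# The radius constant alone shifts a ~88 km distance by ~0.1 km:
d_mean = haversine_km(52.0, 13.0, 52.5, 14.0, radius=6371.0)
d_equ  = haversine_km(52.0, 13.0, 52.5, 14.0, radius=6378.137)
print(d_mean, d_equ)
```

So even if the blackbox also uses haversine, a different radius constant (or an ellipsoidal formula like Vincenty's) could explain part of the discrepancy.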