I'm trying to compare two datasets: a satellite image and a set of known points. In the satellite image I've got 20 to 50 detections that should match up with a set of lat/lon points that I have. There is generally an unknown uniform offset between the two datasets (this offset is a function of the position in the image, the speed/direction of the object, and its size). This problem isn't too difficult when the points are far apart from each other, but when they get close together I start running into mismatching problems.
So in this dummy pic I've got some satellite image detections (red circles) that correspond to some ground truth samples (green squares). I generally end up with some false red circles (bad detections, artifacts, etc.) and some unmatchable green squares (resolution too bad to detect them, obscured by terrain). There is often a uniform offset that isn't predictable (see the top-left corner) and that causes some mismatches. This usually can't be corrected for, since it's a function of a bunch of variables and not just a geocoding error. If I used shortest-distance matching, for example, the samples in the dotted ellipse would be matched incorrectly.
What's the correct algorithm to be using here? Some kind of global nearest neighbour matching with the input features being straight-up x,y coords? Is there something like this in scikit-learn or some other Python lib? My maths background isn't great, so I'm struggling to find the right keywords to search for.

I already mentioned in a comment that this is called the "simultaneous pose and correspondence registration" problem. Additionally, I want to add something on the mathematical side:
First, you may want to run a point set registration algorithm, for example Iterative Closest Point (ICP), to align the two point clouds (red and green) as closely as possible.
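To make this concrete, here is a minimal translation-only ICP-style sketch in Python using scipy. It's an illustration, not a full implementation: real ICP libraries (e.g. Open3D's registration module) also estimate rotation and handle outliers, and the point sets and offset here are made up for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_translation(src, dst, n_iter=20):
    """Estimate the translation that maps `src` (N,2) onto `dst` (M,2)."""
    shift = np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        # Match each shifted source point to its current nearest target point.
        _, idx = tree.query(src + shift)
        # Update the shift by the mean residual of those tentative pairings.
        shift += (dst[idx] - (src + shift)).mean(axis=0)
    return shift

rng = np.random.default_rng(0)
green = rng.uniform(0, 100, size=(30, 2))
red = green + np.array([3.0, -2.0])   # same points with a uniform offset
print(icp_translation(red, green))    # should recover approximately [-3, 2]
```

Since your offset is roughly uniform, alternating "nearest-neighbour pairing" with "shift by the mean residual" like this is often enough to pull the clouds into alignment before doing the final matching.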
Now assume for a moment that there are no outliers at all. Mathematically speaking, what you are hoping for is a minimum-weight perfect matching on the complete bipartite graph, where the vertices on one side are the green squares, the vertices on the other side are the red circles, and the weight of an edge is the square of the Euclidean distance between the corresponding objects. In your initial post, you correctly observed that the nearest-neighbour assignment does not necessarily give a minimum-cost perfect matching. (It may yield neither a minimum-cost matching nor a matching at all.) There are fast algorithms for the minimum-weight perfect matching problem, but you should definitely use a library, because these algorithms are complicated and error-prone to implement yourself.
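For the bipartite case this is exactly the linear sum assignment problem, and `scipy.optimize.linear_sum_assignment` solves it. A small sketch with synthetic data (the points and jitter are invented for the example):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
green = rng.uniform(0, 100, size=(10, 2))
perm = rng.permutation(10)
red = green[perm] + rng.normal(0, 0.1, size=(10, 2))  # shuffled + jittered copy

cost = cdist(green, red, "sqeuclidean")   # (n_green, n_red) edge weights
rows, cols = linear_sum_assignment(cost)  # optimal assignment (Hungarian-style)
# green[rows[i]] is matched to red[cols[i]]; the total squared distance is
# minimized globally, so the true pairing is recovered: perm[cols] == rows.
```

Unlike greedy nearest-neighbour matching, this minimizes the *total* cost over all pairings simultaneously, which is what resolves the ambiguity inside your dotted ellipse.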
Finally, you have to deal with outliers. One way to do this is the following: first introduce artificial dummy vertices, in such a way that the numbers of red circles and green squares become equal, and let the distance between a dummy vertex and any other vertex always be equal to $C$, where $C$ is some fixed large number. Then run the minimum-weight perfect matching algorithm, and at the end discard all edges of the matching that have an unnaturally high weight.
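The dummy-vertex trick above might look like this in code. The helper name, the padding scheme (one dummy per real point), and the choice of `max_dist` are all my own assumptions; the threshold is a tuning knob you'd pick from your data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_with_outliers(green, red, max_dist):
    """Match two 2-D point sets, leaving implausibly distant points unmatched."""
    cost = cdist(green, red, "sqeuclidean")
    n, m = cost.shape
    C = max_dist ** 2          # cost of matching a real point to a dummy
    size = n + m               # enough dummies so every real point has one
    padded = np.full((size, size), C)
    padded[:n, :m] = cost      # real-to-real costs
    padded[n:, m:] = 0.0       # dummy-to-dummy matches are free
    rows, cols = linear_sum_assignment(padded)
    # Keep only real-to-real pairs whose weight is below the threshold.
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if r < n and c < m and cost[r, c] < C]

green = np.array([[10.0 * i, 0.0] for i in range(12)])        # 2 undetectable
red = np.vstack([green[:10] + [0.3, 0.2],                     # 10 true detections
                 [[5.0, 50.0], [55.0, 50.0], [90.0, 50.0]]])  # 3 false detections
print(match_with_outliers(green, red, max_dist=2.0))  # only the 10 true pairs
```

A real detection beyond `max_dist` of its ground-truth point will also be dropped, so set the threshold generously relative to your expected offset.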
Note: again, I am not an expert, but I think this will lead to an acceptable solution. If you can find libraries for your problem online, you should prefer them to my solution.