My problem
I start out with a set of two or more rays (each consisting of an origin and a bearing). Each ray is a 'guess' at the location of an object, but there's no guarantee it's a good guess; the rays may point in very different directions.
Note: I am working in 3D space, but 2 dimensions can be assumed to make it easier. Ultimately, I would prefer to find a solution that works in both spaces, but at the moment the simplest solution wins.
Among this list of rays, I want to find the point that minimizes the combined distance to all of the rays. Essentially, that point should be the best 'guess' as to the location of the object.
Here's a (masterfully drawn) picture to clarify further:
My Approach
In thinking this through, I believe I must take the rays two at a time and find the line segment that represents the shortest distance between them. Once I have that segment, I find its midpoint. Doing this for each pair of rays gives a collection of midpoints, which I can then average to find the closest point.
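This pairwise-midpoint idea can be sketched as follows (hypothetical helper names; directions are assumed to be unit vectors, each pair is assumed non-parallel, and rays are treated as infinite lines for simplicity):

```python
import numpy as np

def closest_points_between_lines(s1, d1, s2, d2):
    """Closest points on two lines s1 + t*d1 and s2 + u*d2.

    Assumes d1, d2 are unit vectors and the lines are not parallel
    (denom is zero for parallel lines).
    """
    # Solve for the parameters t, u minimizing |(s1 + t*d1) - (s2 + u*d2)|
    b = d1.dot(d2)
    w = s1 - s2
    denom = 1.0 - b * b
    t = (b * d2.dot(w) - d1.dot(w)) / denom
    u = (d2.dot(w) - b * d1.dot(w)) / denom
    return s1 + t * d1, s2 + u * d2

def pairwise_midpoint_guess(rays):
    """Average the midpoints of the shortest segments between every ray pair."""
    midpoints = []
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            p, q = closest_points_between_lines(*rays[i], *rays[j])
            midpoints.append((p + q) / 2.0)
    return np.mean(midpoints, axis=0)
```

For two rays crossing at the origin, this returns a point at the origin; for skew lines in 3D it returns the midpoint of the connecting segment.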
I am not 100% confident in this approach, since a pair of rays that are close to parallel will have an intercept very far away that will throw off the average.
I began implementing the solution from a related question, but it gives me poor results when I restrict it to 2 dimensions (by setting the third dimension to the same value), and I'll be honest in saying that it was a bit over my head. Images of my code and its poor results are below.
Question
Is there an existing algorithm for this problem? If not, do you have any suggested approaches? I am out of my element here, but I need to find a solution and I appreciate your help!
Solution: Code based on Mauricio's Answer
Using Mauricio's answer and scipy.optimize.least_squares, I found a solution that works in both 2D and 3D. I haven't explored all the different options for least_squares, but all tests indicate it is doing what I expect.
Here's an image of the performance, which is what I would expect from this:
Green is the 'guess', the black lines are the rays.
Code
I have only included the most important parts, namely the main function (locate) and the optimization function (distance).
locate takes in a list of rays and feeds them (as well as an initial guess) into a least squares optimizer.
distance is $F_i(P)$ from Mauricio's algorithm below; for each ray it returns the componentwise offset from the point to its projection onto that ray.
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import least_squares


def locate(rays):
    """
    Determine the closest point to an arbitrary number of rays, and optionally plot the results
    :param rays: list of ray tuples (S, D) where S is the starting point & D is a unit vector
    :return: scipy.optimize.OptimizeResult object from scipy.optimize.least_squares call
    """
    # Generate a starting position, the dimension-wise mean of each ray's starting position
    ray_start_positions = []
    for ray in rays:
        ray_start_positions.append(ray[0])
    starting_P = np.stack(ray_start_positions).mean(axis=0).ravel()
    # Start the least squares algorithm, passing the list of rays to our error function
    ans = least_squares(distance, starting_P, kwargs={'rays': rays})
    return ans


def distance(P, rays):
    """
    Calculate the componentwise error between a point and each ray
    :param P: 1xd array representing coordinates of a point
    :param rays: list of ray tuples (S, D) where S is the starting point & D is a unit vector
    :return: flat vector of n*d componentwise errors from point P to each ray in rays
    """
    dims = len(rays[0][0][0])
    # Generate array to hold calculated error distances
    errors = np.full([len(rays) * dims, 1], np.inf)
    # For each ray, calculate the error and put it in the errors array
    for i, _ in enumerate(rays):
        S, D = rays[i]
        t_P = D.dot((P - S).T) / D.dot(D.T)
        if t_P > 0:
            # P projects onto the ray: error is the offset from the projection
            errors[i * dims:(i + 1) * dims] = (P - (S + t_P * D)).T
        else:
            # P falls behind the ray's origin: error is the offset from S
            errors[i * dims:(i + 1) * dims] = (P - S).T
    # Convert the error array to a vector (vs an nx1 matrix)
    return errors.ravel()
```
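For reference, here is a self-contained, simplified restatement of the same residual using flat 1-D arrays for S and D instead of the 1xd row-matrix convention above (the ray values are arbitrary test data, chosen so the rays intersect at the origin):

```python
import numpy as np
from scipy.optimize import least_squares

def ray_residuals(P, rays):
    """Stacked F_i(P) residuals, with S and D as flat 1-D arrays."""
    res = []
    for S, D in rays:
        t = D.dot(P - S) / D.dot(D)
        # Offset to the projection, or to S when P falls behind the ray's origin
        res.append(P - (S + t * D) if t > 0 else P - S)
    return np.concatenate(res)

# Two example rays (arbitrary test data) that intersect at the origin
rays = [
    (np.array([0.0, -2.0]), np.array([0.0, 1.0])),
    (np.array([-2.0, 1.0]), np.array([2.0, -1.0]) / np.linalg.norm([2.0, -1.0])),
]
start = np.mean([S for S, _ in rays], axis=0)
fit = least_squares(ray_residuals, start, kwargs={'rays': rays})
# fit.x is approximately [0, 0], the intersection point
```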
Mauricio's Answer
A ray can be parameterized as $R(t) = S + t D$ where $S$ is the starting point, $D$ is the direction unit-vector and $t$ is a positive scalar.
The closest point on the ray to a point $P$ is the projection of the point on the ray or the point $S$ itself. The projection is given by:
$t_P = D \cdot (P - S) / (D \cdot D)$
The distance $E$ is:
$$E = \begin{cases} \| P - (S + t_P D) \| & \text{if } t_P > 0 \\ \| P - S \| & \text{if } t_P \leq 0 \end{cases}$$
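A quick numeric check of these two formulas (values chosen arbitrarily):

```python
import numpy as np

S = np.array([0.0, 0.0])   # ray origin
D = np.array([1.0, 0.0])   # unit direction along +x
P = np.array([3.0, 4.0])   # query point

t_P = D.dot(P - S) / D.dot(D)   # projection parameter: 3.0
if t_P > 0:
    # P projects onto the ray at (3, 0); distance is 4.0
    E = np.linalg.norm(P - (S + t_P * D))
else:
    # P falls behind the origin; distance is to S itself
    E = np.linalg.norm(P - S)
```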
The problem to solve is then to minimize the functional:
$\arg\min_P \sum_{i=1}^{n} E_i(P)^2$
Each branch is a linear least-squares problem; however, since the distance is a piecewise function, you will probably need to iterate on the solution with a Gauss-Newton-type solver until reaching convergence.
Let us define a function $F_i(P)$ that returns a column vector, such that $E_i(P)^2 = F_i(P)^T F_i(P)$. Such function is defined as:
$$F(P) = \begin{cases} P - (S + t_P D) & \text{if } t_P > 0 \\ P - S & \text{if } t_P \leq 0 \end{cases}$$
To use a Gauss-Newton solver, you have to provide a starting point $P$ (which can be the average of the ray origins), a $3n \times 1$ vector of errors (assuming 3D vectors are used), formed by stacking the outputs of the individual $F_i(P)$, and a $3n \times 3$ Jacobian matrix, formed by stacking the individual $3 \times 3$ Jacobians of $F_i(P)$.
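Differentiating $F_i$ gives a simple analytic Jacobian: for $t_P > 0$, $\partial F/\partial P = I - D D^T / (D \cdot D)$; for $t_P \leq 0$, $\partial F/\partial P = I$. A sketch of the stacked Jacobian (assuming flat 1-D arrays for S and D; the function name is hypothetical):

```python
import numpy as np

def jacobian(P, rays):
    """Stacked (n*d) x d Jacobian of the F_i residuals.

    For t_P > 0, F(P) = P - (S + t_P * D) with t_P = D.(P - S)/(D.D),
    so dF/dP = I - D D^T / (D.D); for t_P <= 0, F(P) = P - S, so dF/dP = I.
    """
    d = len(P)
    blocks = []
    for S, D in rays:
        t = D.dot(P - S) / D.dot(D)
        if t > 0:
            blocks.append(np.eye(d) - np.outer(D, D) / D.dot(D))
        else:
            blocks.append(np.eye(d))
    return np.vstack(blocks)
```

If you adopt this convention, the same function could be passed as `jac=jacobian` in the `least_squares` call (it receives the same `kwargs` as the residual function), avoiding finite-difference Jacobian estimates.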