Let's say you have two sets of M points $p_1...p_M$, and $q_1...q_M$, which reside in $\mathbb{R}^N$. Is there an efficient (e.g. polynomial in M and N) algorithm to determine if the point-sets are the same up to some orthogonal transformation? I.e. $\exists O$ where $O^T O=I$ s.t. $p_i = O q_i, i=1...M$.
Is there an efficient algorithm for comparing collections of points?
1.7k Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 3 best solutions below.
First of all, the answer to the question as stated is clearly negative: you have to take into account not only the number of points but also the size of their coordinates. For simplicity, I will work with points that have integer coordinates written in binary. To do any computation with a vector ${\bf x}=(x_1,...,x_N)$ you first have to be able to write it down, which has complexity comparable to $$ \ell({\bf x}):=\sum_{i=1}^N\log(|x_i|).$$ Therefore, the right question is about an algorithm polynomial in terms of $$ \ell(p):= \sum_{j=1}^M \ell({\bf p}_j),$$ where each ${\bf p}_j$ is a vector, and the same for the $q$'s.

I will consider the generic case, when the squared Euclidean norms $$ |{\bf p}_j|^2, \quad j=1,...,M$$ are all distinct, and the squared norms $$ |{\bf q}_j|^2, \quad j=1,...,M$$ are also all distinct. Then you can first apply a sorting algorithm to arrange the numbers $|{\bf p}_j|^2$ and $|{\bf q}_j|^2$, $j=1,...,M$, in decreasing order; such sorting can be done in polynomial (actually, subquadratic) time. Assume this has been done. Since orthogonal transformations preserve norms, any orthogonal transformation sending the $p$'s to the $q$'s then has to send ${\bf p}_j$ to ${\bf q}_j$ (in particular, the two sorted lists of squared norms must coincide). Such an orthogonal transformation exists if and only if the corresponding inner products agree: $$ \langle {\bf p}_j, {\bf p}_k\rangle = \langle {\bf q}_j, {\bf q}_k\rangle $$ for each pair $(j,k)$, $j<k$. [Verifying this "if and only if" is elementary linear algebra.] Comparing these inner products can be done in polynomial time with respect to $\ell(p)$ and $\ell(q)$.
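As an illustration, here is a minimal NumPy sketch of the generic-case test (the function name, the row layout of the point arrays, and the floating-point tolerance are my own choices, not part of the answer):

```python
import numpy as np

def orthogonally_equivalent(P, Q, tol=1e-9):
    """Generic-case test: sort each M x N point set by decreasing squared
    norm, then compare all pairwise inner products (the Gram matrices)."""
    if P.shape != Q.shape:
        return False
    # Sort rows by decreasing squared Euclidean norm (assumed all distinct).
    P = P[np.argsort(-(P * P).sum(axis=1))]
    Q = Q[np.argsort(-(Q * Q).sum(axis=1))]
    # An orthogonal map sending q_j -> p_j exists iff <p_j, p_k> = <q_j, q_k>
    # for every pair (j, k), i.e. the two Gram matrices coincide.
    return np.allclose(P @ P.T, Q @ Q.T, atol=tol)
```

Note that this works with floating-point coordinates under a tolerance; the polynomial-time claim in the answer is for exact integer arithmetic.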
A similar polynomial-time solution exists in the nongeneric case as well; it involves sorting (some of) the above inner products in decreasing order. However, the details are a bit too long for the margins provided by the MSE.
Yes. Compute the singular value decomposition of the two sets of points (as matrices whose rows are the coordinate vectors). If the singular values do not agree, then there is no orthogonal matrix mapping one set to the other. If the singular values do agree, then the singular vectors give an orthogonal coordinate system in which the two sets may be identical, and this is easy to check: sort the points by their projection onto the first singular vector, which splits them into bins. If the bins all have matching cardinalities, sort each bin by projection onto the second singular vector, which splits them into subbins; if the subbins all have matching cardinalities, sort each subbin by projection onto the third singular vector, and so on. Either some sub-...-subbin has mismatched cardinality, so that the point sets are not orthogonally related, or every bin is eventually reduced to sets of points whose coordinates agree completely (in particular, a bin of cardinality one is trivial to verify). At each step, you either find that the points don't match, or you end up showing that they do.
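A rough NumPy sketch of this test, under two simplifying assumptions that are mine rather than the answer's: the singular values are distinct, and the sign ambiguity of the computed singular vectors is resolved by brute-force sign enumeration instead of the careful binning described above (the plain float sort used here is fragile when projections are nearly tied; the binning handles that properly):

```python
import itertools
import numpy as np

def same_up_to_orthogonal(P, Q, tol=1e-8):
    """Compare singular values, then compare the two clouds written in their
    own right-singular bases as sorted multisets of coordinate rows, trying
    every sign flip of the basis vectors. Assumes distinct singular values."""
    if P.shape != Q.shape:
        return False
    _, sp, Vtp = np.linalg.svd(P, full_matrices=False)
    _, sq, Vtq = np.linalg.svd(Q, full_matrices=False)
    if not np.allclose(sp, sq, atol=tol):
        return False                      # singular values must agree
    A = np.array(sorted(map(tuple, P @ Vtp.T)))   # P in its singular basis
    B = Q @ Vtq.T                                 # Q in its singular basis
    for signs in itertools.product((1.0, -1.0), repeat=B.shape[1]):
        if np.allclose(A, np.array(sorted(map(tuple, B * signs))), atol=tol):
            return True
    return False
```

The sign enumeration costs $2^{\min(M,N)}$ comparisons, so for large dimension the binning strategy in the answer is the right tool; this sketch only shows the shape of the test.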
Note that this method can also determine whether there is a translation followed by a rotation that maps the one point set onto the other: subtract each cloud's mean first (so that the SVD sees only the shape of the cloud), and then translate one mean to the other and rotate. If you do not want to permit the two point clouds to have different means, then check the means first; if they match, apply the above.
As an aside, since the problem doesn't actually request the orthogonal matrix: transforming the set of singular vectors of one point set to the set of singular vectors of the other, to compute the orthogonal transformation between them, is straightforward. (Really. It's just a change of basis between orthogonal bases.)
The SVD is $O(N^2M + M^3)$. The projections are at worst $O(NM)$. The sorting is at worst $O(NM \log M)$ (since we're only sorting on the pseudo-real number projections, not some sort of lexicographic order), and typically is not this bad, since the direction of the largest dispersion in the set of points is captured in the first singular vector (so there can't be only a single occupied bin in the first few recursions of the sorting-into-bins phase). We can actually be sloppy in the sorting-into-bins phase and only sort into bins instead of to precise projections. (This will be necessary if the projections are computed to finite precision, since we would not want to reject two points which agree up to the precision of the multiply-and-accumulates in the comparison.) The asymptotic running time estimates above assume constant-time arithmetic, which may not be the case if you are using something more complicated than machine-precision floating-point representations of the coordinates.
I call this the brute-force method and I don't know enough to do Big-O analysis on it, but here ya go.
You're asking whether there exists $O = (o_{ij})$ such that $p_i = O q_i$. But each $p_i$ is in $\Bbb{R}^N$, so this is the same as asking whether there is a block-diagonal linear operator
$\mathcal{O} = \begin{pmatrix} O & 0 & \dots & 0 \\ 0 & O & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & O\end{pmatrix}$ (an $NM \times NM$ matrix with $M$ copies of $O$ on the diagonal) such that $p = \begin{pmatrix} p_1^1 \\ p_1^2 \\ \vdots \\ p_1^N \\ \vdots \\ p_M^1 \\ \vdots \\ p_M^N\end{pmatrix} = \mathcal{O} q$. The only problem now is that the quantities you're solving for sit inside a matrix, when it would be nicer to have them in the vector that a matrix multiplies; that way you can use standard augmented-matrix methods with row elimination.
So find the sparse $NM \times N^2$ matrix $\mathcal{Q}$ with entries from $q$ such that $\mathcal{Q} o = p$ where $o$ is the vector of entries of $O$ that you're solving for.
This doesn't yet address the orthogonality constraint. For that, note that $p_i = O q_i$ with $O^TO = I$ holds for some $O$ iff $p_i = O q_i$ and $q_i = O^T p_i$ hold for some $O$: the combined equations force $\langle p_i, p_j\rangle = \langle q_i, q_j\rangle$ for all pairs, which is exactly the condition for an orthogonal map to exist. (A solution of the combined linear system need not itself be orthogonal off the span of the $q_i$'s, but it can then be corrected to an orthogonal one.) So form $p$ and $\mathcal{Q}$ from the $q_i$'s as well as the $p_i$'s.
Now solve for $o$ using standard row elimination.
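Here is one way this construction could be sketched in NumPy, using the identity $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$ with column-stacked vec; the function name and the least-squares consistency check (instead of hand row elimination) are my own choices:

```python
import numpy as np

def orthogonal_map_exists(P, Q, tol=1e-8):
    """Stack the equations p_i = O q_i and q_i = O^T p_i as one linear
    system in o = vec(O) (column-stacked), then test it for consistency."""
    M, N = P.shape
    rows, rhs = [], []
    for i in range(M):
        # p_i = O q_i  <=>  (q_i^T (x) I_N) vec(O) = p_i
        rows.append(np.kron(Q[i][None, :], np.eye(N)))
        rhs.append(P[i])
        # q_i = O^T p_i  <=>  (I_N (x) p_i^T) vec(O) = q_i
        rows.append(np.kron(np.eye(N), P[i][None, :]))
        rhs.append(Q[i])
    A = np.vstack(rows)            # (2MN) x N^2 coefficient matrix
    b = np.concatenate(rhs)
    o, *_ = np.linalg.lstsq(A, b, rcond=None)
    # The system is consistent iff the least-squares residual vanishes.
    return np.allclose(A @ o, b, atol=tol)
```

The matrix `A` here plays the role of $\mathcal{Q}$ above (augmented with the transposed equations), and consistency of the system certifies the existence of an orthogonal map even when the particular solution `o` is not itself orthogonal.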