Reconstruction of a square from norm vectors to the corners (perspective view of a square).


I have the following case (in three-dimensional space): a square is viewed in perspective from a point, and the norm (unit) vectors directed from that point to the four corners of the square are known.

The four norm vectors are $\vec{A}, \vec{B}, \vec{C}, \vec{D}$. They point to (but do not reach!) the four corners of the square with side length $l$ (in rotational order, so $\vec{A}$ points to the corner opposite the one $\vec{C}$ points to). My question is: how do I find the coordinates of the corners of the square, given these vectors and $l$? There is some error in $\vec{A}, \vec{B}, \vec{C}, \vec{D}$, and I'm not sure whether this means there might be no exact solution. If there is none, an approximation or something similar to least squares is welcome.

Thanks for any help!

PS. I have tried to use wolfram cloud but it does not find an answer using the following formulation (due to a timeout):

Solve[{
    Norm[a*{a1,a2,a3} - b*{b1,b2,b3}] == l,
    Norm[b*{b1,b2,b3} - c*{c1,c2,c3}] == l,
    Norm[c*{c1,c2,c3} - d*{d1,d2,d3}] == l,
    Norm[d*{d1,d2,d3} - a*{a1,a2,a3}] == l,
    Norm[a*{a1,a2,a3} - c*{c1,c2,c3}] == l*Sqrt[2],
    Norm[b*{b1,b2,b3} - d*{d1,d2,d3}] == l*Sqrt[2]
    }, {a, b, c, d}]

BEST ANSWER

I've found a method to solve my own problem. Let $a, b, c$ denote the distances from the origin to the corners pointed to by $\vec{A}, \vec{B}, \vec{C}$ respectively.

  1. Use a search method for the "optimal" $a$.
  2. For each $a$, place one corner of the square at $a\vec{A}$.
  3. Use trigonometry to find the possible locations of the opposite corner of the square at $c\vec{C}$. There can be 0, 1, or 2 possible values for $c$.
  4. For each value of $c$, find the third corner of the square at $b\vec{B}$. Per value of $c$ there are again 0, 1, or 2 options for $b$.
  5. For each resulting triangle $(a, b, c)$, the fourth corner of the square can now be computed exactly, giving a direction $\vec{D'}$. The error of this candidate square (choices $a, b, c$) is the angle between $\vec{D'}$ and $\vec{D}$.
  6. Select the candidate square with the lowest error.
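The steps above can be sketched in Python/NumPy. This is a hedged sketch, not my original code: the names `square_from_rays` and `quad_roots` and the grid over $a$ are illustrative choices, and I solve for $b$ only from the side $|a\vec{A} - b\vec{B}| = l$, letting the angle at $\vec{D}$ absorb the remaining constraints.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def quad_roots(q, r):
    """Positive real roots of t^2 + q t + r = 0."""
    disc = q * q - 4.0 * r
    if disc < 0.0:
        return []
    s = np.sqrt(disc)
    return [t for t in ((-q - s) / 2.0, (-q + s) / 2.0) if t > 0.0]

def square_from_rays(A, B, C, D, l, a_grid):
    """Steps 1-6: search over a, the distance to the corner along A.

    A, B, C, D are unit vectors toward consecutive corners (A opposite C).
    Returns (error, corner_A, corner_B, corner_C, corner_D) with the
    smallest angular error between the computed and measured D directions.
    """
    best = None
    for a in a_grid:
        # step 3: |aA - cC|^2 = 2 l^2  =>  c^2 - 2a(A.C) c + a^2 - 2 l^2 = 0
        for c in quad_roots(-2.0 * a * A.dot(C), a * a - 2.0 * l * l):
            # step 4: |aA - bB|^2 = l^2  =>  b^2 - 2a(A.B) b + a^2 - l^2 = 0
            for b in quad_roots(-2.0 * a * A.dot(B), a * a - l * l):
                # step 5: the diagonals of a square share their midpoint,
                # so the fourth corner is aA + cC - bB
                Dp = a * A + c * C - b * B
                err = np.arccos(np.clip(unit(Dp).dot(D), -1.0, 1.0))
                if best is None or err < best[0]:
                    best = (err, a * A, b * B, c * C, Dp)
    return best
```

For noisy input vectors, one could also fold the residuals of the remaining side lengths into the error instead of relying on the $\vec{D}$ angle alone.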
ANSWER

You have an issue that falls into the category of "Procrustes problems".

In this Wikipedia article you will see that the key fact is to work on the $3 \times 3$ matrix $C=BA^T$, where

$$A=\begin{pmatrix}a_1&b_1&c_1&d_1\\a_2&b_2&c_2&d_2\\a_3&b_3&c_3&d_3 \end{pmatrix}$$

(in your notation: the coordinates of the 4 points, one per column) and

$$B=\ell/2\left(\begin{array}{rrrr}1&1&-1&-1\\1&-1&-1&1\\a&a&a&a \end{array}\right)$$

which is the "perfect (horizontal) square" (please note the $\ell/2$ factor, and that $a \ell/2$ is the unknown distance from the origin to the horizontal reference plane; we will have to try several values of $a$ in a reasonable range).

Then compute the Singular Value Decomposition (SVD) of $C$ (using suitable software):

$$C=U\Sigma V^T$$

where $U,V$ are orthogonal $3 \times 3$ matrices (and $\Sigma$ the diagonal matrix of the so-called "singular values").

Then the rotation which brings the columns of $A$ "at best" onto the columns of $B$ is

$$R=UV^T$$

Moreover, the discrepancy (the degree to which the real points are close to a perfect square) is measured by the squared Frobenius norm of the gap:

$$d=\|B-RA\|^2_F,$$

which is nothing other than the sum of the squares of all entries of the matrix $B-RA$ (and is naturally $0$ in the case of a perfect matching).
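In NumPy the whole computation, for one value of $a$, is a few lines. This is a sketch under the notation above; the function name and the one-line sweep over $a$ are my own choices.

```python
import numpy as np

def procrustes_gap(A, l, a):
    """Squared Frobenius gap d = ||B - R A||_F^2 between the measured corner
    matrix A (3x4, one corner per column) and the reference square B,
    after the best rotation R = U V^T from the SVD of C = B A^T."""
    B = (l / 2.0) * np.array([[1.0,  1.0, -1.0, -1.0],
                              [1.0, -1.0, -1.0,  1.0],
                              [a,    a,    a,    a  ]])
    U, _, Vt = np.linalg.svd(B @ A.T)   # orthogonal Procrustes via SVD
    R = U @ Vt                          # best rotation (up to remark 2 below)
    return np.sum((B - R @ A) ** 2)     # sum of squared entries of the gap
```

Sweeping $a$ then reduces to, e.g., `best_a = min(np.linspace(0.1, 2.0, 20), key=lambda a: procrustes_gap(A, 2.0, a))`.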

Remarks :

1) About $a$: either we use a brute-force sweep, or, if processing time matters, we can take an initial value of $a$ and then try $a+\Delta a$ with a small increment $\Delta a$; if the discrepancy obtained with $a+\Delta a$ is smaller than with $a$, this indicates not only that the direction of the move is the right one, but even allows a convergence acceleration.
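As one concrete alternative to the adaptive increment (my suggestion, not part of the answer; it assumes the discrepancy $d(a)$ is unimodal near the optimum), a golden-section search brackets the minimizing $a$ without having to choose a step size:

```python
def golden_min(f, lo, hi, tol=1e-6):
    """Minimise a unimodal function f on [lo, hi] by golden-section search."""
    inv_phi = (5 ** 0.5 - 1) / 2          # 1/phi, approx. 0.618
    x1 = hi - inv_phi * (hi - lo)
    x2 = lo + inv_phi * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:                       # minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - inv_phi * (hi - lo)
            f1 = f(x1)
        else:                             # minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + inv_phi * (hi - lo)
            f2 = f(x2)
    return (lo + hi) / 2
```

Here `f` would be the discrepancy as a function of $a$; each iteration shrinks the bracket by a constant factor, so the cost is logarithmic in the required precision.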

2) It may happen that $R$ is a reflection (determinant $-1$) instead of a rotation; this happens only if the columns of $A$ haven't been placed according to a direct orientation.

3) Matlab program:

A=[0.6 0.5 0.4 0.6      % measured corner coordinates, one per column
   1.1 0.9 -0.9 -1.1
   1.1 -1.1 -1 1.1];
for k=1:20
   a=k/10;              % sweep the height parameter a
   B=[1 1 -1 -1         % reference square (here with l/2 = 1)
      1 -1 -1 1
      a a  a  a];
   C=B*A';
   [U,S,V]=svd(C);
   R=U*V';              % best rotation (orthogonal Procrustes)
   s(k)=sum(sum((B-R*A).^2)); % squared Frobenius norm of the gap
end;
s % minimum for a=0.5 in this example