Invert vector projection onto a plane


Assume I have a vector $V_a$ that belongs to the plane $A$ defined by the normal $n_a$, which means that $V_a \cdot n_a = 0$.

Then assume that I projected $V_a$ onto a plane $B$ defined by its normal $n_b$ resulting in vector $V_b = V_a - (n_b \cdot V_a)n_b$.

If I know $V_b$, $n_a$ and $n_b$, how do I recover $V_a$? I initially thought it would be as simple as projecting $V_b$ onto $A$, but a quick numerical test showed me that this isn't the case. Inverting the projection matrix also didn't work, because $n_b$ is an eigenvector with eigenvalue zero, so the matrix is singular.

Notes:

  1. I am interested in vector directions, so all vectors are normalized.
  2. Vectors are restricted to $\mathbb R^3$
  3. $n_a \cdot n_b \ne 0$
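
A minimal numerical illustration of the failed naive attempt (the specific vectors are my own choice, not from the question): projecting $V_b$ back onto $A$ does not return $V_a$.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Arbitrary example: plane A with normal n_a, and V_a lying in A.
n_a = np.array([0.0, 0.0, 1.0])
n_b = normalize(np.array([1.0, 1.0, 1.0]))
V_a = normalize(np.array([1.0, 2.0, 0.0]))   # V_a . n_a = 0

# Forward projection of V_a onto plane B.
V_b = V_a - np.dot(n_b, V_a) * n_b

# Naive attempt: project V_b back onto plane A.
naive = V_b - np.dot(n_a, V_b) * n_a
print(np.allclose(naive, V_a))   # False: re-projection does not invert
```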

There are 3 answers below.

BEST ANSWER

I figured it out!

Starting point: write the projection as
$V_a - x\,n_b = V_b$
where $x = n_b \cdot V_a$ is an unknown scalar.

Taking the dot product of both sides of the equation with $n_a$ gives:
$V_a\cdot n_a - x\,(n_b\cdot n_a) = V_b\cdot n_a$

The first term is zero, so solving for $x$:
$x=-\frac{V_b\cdot n_a}{n_b\cdot n_a}$

Substituting back into the first equation:
$V_a = V_b + x\,n_b = V_b - \frac{V_b\cdot n_a}{n_b\cdot n_a}\,n_b$
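
A quick numerical check of this recovery formula (the vectors are arbitrary illustrative choices):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

n_a = np.array([0.0, 0.0, 1.0])              # normal of plane A
n_b = normalize(np.array([1.0, 1.0, 1.0]))   # normal of plane B
V_a = normalize(np.array([1.0, 2.0, 0.0]))   # V_a . n_a = 0

V_b = V_a - np.dot(n_b, V_a) * n_b           # forward projection onto B

# Recovery: V_a = V_b - (V_b . n_a / n_b . n_a) * n_b
recovered = V_b - (np.dot(V_b, n_a) / np.dot(n_b, n_a)) * n_b
print(np.allclose(recovered, V_a))           # True
```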

ANSWER

The projection of $V_b$ onto $A$ is $V_b-(n_a\cdot V_b) n_a$. This is in the right direction, but its magnitude is smaller than $|V_a|$. In the first projection $|V_b|=|V_a|\cos\alpha$, where $\alpha$ is the angle between $n_a$ and $n_b$, which we can write as $|V_b|=|V_a|\,(n_a\cdot n_b)$. The second projection contributes another factor of $\cos\alpha$. Therefore the final answer is: $$\frac{V_b-(n_a\cdot V_b) n_a}{(n_a\cdot n_b)^2}$$ You can see why you don't want the normals to be perpendicular, since you don't want to divide by $0$. (Note that this magnitude argument assumes $V_a$ is parallel to the projection of $n_b$ onto $A$, i.e. that $V_a$, $n_a$ and $n_b$ are coplanar; in general the projection of $V_b$ back onto $A$ is not parallel to $V_a$.)
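
A numerical check of this formula in the coplanar configuration, with $V_a$ chosen along the projection of $n_b$ onto $A$ (the specific vectors and the angle are my own choices):

```python
import numpy as np

alpha = 0.5                                    # angle between n_a and n_b
n_a = np.array([0.0, 0.0, 1.0])
n_b = np.array([np.sin(alpha), 0.0, np.cos(alpha)])
V_a = np.array([1.0, 0.0, 0.0])                # along proj of n_b onto A

V_b = V_a - np.dot(n_b, V_a) * n_b             # forward projection onto B

# Back-project onto A, then rescale by (n_a . n_b)^2.
recovered = (V_b - np.dot(n_a, V_b) * n_a) / np.dot(n_a, n_b) ** 2
print(np.allclose(recovered, V_a))             # True
```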

ANSWER

So, given $n_a$, $n_b$ and $V_b$ (let $n$ without a subscript denote the dimension), recall the relation

$V_b = V_a - (n_b \cdot V_a) n_b$, from which we want to recover $V_a$.

This can be broken into a system of linear equations, so we have that

$$ \begin{bmatrix}V_{b,0} \\ V_{b,1} \\ \vdots \\ V_{b, n-1} \end{bmatrix} = \begin{bmatrix} V_{a,0} - (n_{b,0} V_{a,0} + n_{b,1} V_{a,1} + \cdots) n_{b,0} \\ V_{a,1} - (n_{b,0} V_{a,0} + n_{b,1} V_{a,1} + \cdots) n_{b,1} \\ \vdots \\ V_{a,n-1} - (n_{b,0} V_{a,0} + n_{b,1} V_{a,1} + \cdots) n_{b,n-1}\end{bmatrix}$$

This can be expressed in matrix form as

$$ \begin{bmatrix}V_{b,0} \\ V_{b,1} \\ \vdots \\ V_{b, n-1} \end{bmatrix} = \begin{bmatrix} 1 - n_{b,0}^2 & -n_{b,0}n_{b,1} & ... & -n_{b,0}n_{b,n-1} \\ -n_{b,0}n_{b,1} & 1 - n_{b,1}^2 & ... & -n_{b,1} n_{b,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ -n_{b,0}n_{b,n-1} & -n_{b,1}n_{b,n-1} & ... & 1 - n_{b,n-1}^2 \end{bmatrix} \begin{bmatrix} V_{a,0} \\ V_{a,1} \\ \vdots \\ V_{a,n-1} \end{bmatrix}$$
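
As a quick numpy check (with an arbitrary unit normal of my own choosing), $n_b$ lies in the null space of this matrix, so it is not invertible:

```python
import numpy as np

n_b = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # any unit normal
P = np.eye(3) - np.outer(n_b, n_b)               # projection onto plane B

print(np.allclose(P @ n_b, 0))        # True: n_b is in the null space
print(np.linalg.matrix_rank(P))       # 2, not 3 -> singular
```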

Now, as you mention, this matrix isn't invertible, so there is one additional equation we can add, namely $V_a \cdot n_a = 0$, yielding:

$$ \begin{bmatrix}V_{b,0} \\ V_{b,1} \\ \vdots \\ V_{b, n-1} \\ 0 \end{bmatrix} = \begin{bmatrix} 1 - n_{b,0}^2 & -n_{b,0}n_{b,1} & ... & -n_{b,0}n_{b,n-1} \\ -n_{b,0}n_{b,1} & 1 - n_{b,1}^2 & ... & -n_{b,1} n_{b,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ -n_{b,0}n_{b,n-1} & -n_{b,1}n_{b,n-1} & ... & 1 - n_{b,n-1}^2 \\ n_{a,0} & n_{a,1} & ... & n_{a,n-1} \end{bmatrix} \begin{bmatrix} V_{a,0} \\ V_{a,1} \\ \vdots \\ V_{a,n-1} \end{bmatrix}$$

Now the matrix is no longer square, so it may seem we have made the problem worse. But here is the catch: we can use a tool called a generalized inverse (pseudoinverse) to solve for $V_a$ (and, in general, there can be more than one such $V_a$).

A quick google search yields: https://www.math.wustl.edu/~sawyer/handouts/GenrlInv.pdf
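
In numpy this can be sketched with `np.linalg.lstsq` (or `np.linalg.pinv`), which applies exactly this kind of generalized inverse to the stacked $4\times 3$ system; the example vectors are my own:

```python
import numpy as np

n_a = np.array([0.0, 0.0, 1.0])
n_b = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
V_a = np.array([1.0, 2.0, 0.0]) / np.sqrt(5.0)   # V_a . n_a = 0
V_b = V_a - np.dot(n_b, V_a) * n_b

# Stack the projection equations with the constraint V_a . n_a = 0.
M = np.vstack([np.eye(3) - np.outer(n_b, n_b), n_a])   # shape (4, 3)
rhs = np.append(V_b, 0.0)                              # shape (4,)

recovered, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print(np.allclose(recovered, V_a))   # True
```

Since $n_a \cdot n_b \ne 0$, the extra row makes $M$ full column rank, so the least-squares solution is unique here.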

A different strategy altogether is to instead look at the augmented matrix

$$ \begin{bmatrix} 1 - n_{b,0}^2 & -n_{b,0}n_{b,1} & ... & -n_{b,0}n_{b,n-1} & | & V_{b,0} \\ -n_{b,0}n_{b,1} & 1 - n_{b,1}^2 & ... & -n_{b,1} n_{b,n-1} & | & V_{b,1} \\ \vdots & \vdots & \ddots & \vdots & | & \vdots \\ -n_{b,0}n_{b,n-1} & -n_{b,1}n_{b,n-1} & ... & 1 - n_{b,n-1}^2 & | & V_{b,n-1} \\ n_{a,0} & n_{a,1} & ... & n_{a,n-1} & | & 0 \end{bmatrix} $$

and attempt to bring it into reduced row echelon form. If the system is still degenerate (i.e., more than one row of zeroes), then you just have to accept that $V_a$ is not unique.
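
A sketch of this row-reduction approach in numpy (the small `rref` helper is my own; SymPy's `Matrix.rref` would also do the job), again with illustrative vectors:

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce A to reduced row echelon form with partial pivoting."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        pivot = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[pivot, c]) < tol:
            continue                 # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]
        A[r] /= A[r, c]
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]
        r += 1
    return A

n_a = np.array([0.0, 0.0, 1.0])
n_b = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
V_a = np.array([1.0, 2.0, 0.0]) / np.sqrt(5.0)
V_b = V_a - np.dot(n_b, V_a) * n_b

# Augmented matrix [M | rhs] with the extra constraint row n_a.
aug = np.column_stack([np.vstack([np.eye(3) - np.outer(n_b, n_b), n_a]),
                       np.append(V_b, 0.0)])
R = rref(aug)
recovered = R[:3, -1]     # unique solution sits in the last column
print(np.allclose(recovered, V_a))   # True
```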

Note that this doesn't include @Andrei's correction, so you need to incorporate that as well before using this.