Projection of vector $v$ onto a plane in $\mathbb R^k$


Suppose that $v$ is a vector in $\mathbb R^k$, and that $n_1,n_2,\dots,n_{k-2}$ are the normal vectors of a (two-dimensional) plane $H$. What is the projection of $v$ onto that plane?


If $k=3$, then the projection is $$x=v-(n_1\cdot v)n_1$$
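As a quick numerical check of the $k=3$ formula (the values below are a made-up example; note the formula requires $n_1$ to be a unit normal):

```python
import numpy as np

# Unit normal of the xy-plane in R^3 and an arbitrary vector (example values).
n1 = np.array([0.0, 0.0, 1.0])
v = np.array([2.0, -1.0, 5.0])

# x = v - (n1 . v) n1: subtract the normal component of v.
x = v - np.dot(n1, v) * n1

print(x)              # components 2, -1, 0: the normal part is removed
print(np.dot(n1, x))  # 0.0, so x lies in the plane
```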



BEST ANSWER

Without just throwing out an answer and saying "this is it", I'll try a rational approach to what a projection is: an (orthogonal) projection of a vector $v$ onto a hypersurface $S$ is the point $p$ in $S$ such that the distance from $p$ to $v$ is minimized. To solve your problem we need to know a point in the plane, and we will assume that the plane contains the origin. The normals to the plane are orthogonal to every vector in the plane, so if $x$ is in the plane, then $(n_i,x)=0$ for all $1\leq i\leq k-2$, where $(\cdot,\cdot)$ denotes the standard inner product. We then seek to minimize the function

$$f(x)=||x-v||_2^2=(x,x)-2(x,v)+(v,v)$$

subject to the constraints

$$g_i(x)=(n_i,x)=0$$

This is a classic Lagrange Multipliers problem. We then seek the critical points of the function

$$L(x,\lambda)=(x,x)-2(x,v)+(v,v)-2\sum_{i=1}^{k-2}\lambda_i(n_i,x)$$

(the factor of $2$ on the multiplier term is just a convenient normalization). The derivative with respect to $x_j$ is

$$\frac{\partial L}{\partial x_j}=2x_j-2v_j-2\sum_{i=1}^{k-2}\lambda_in_{ij}=0$$

We can create a system of equations that allows us to solve for the $\lambda$ vector by multiplying by $n_{mj}$ and summing over all $j$ to get

$$(n_m,x)-(n_m,v)-\sum_{i=1}^{k-2}(n_i,n_m)\lambda_i=0$$

or as a linear system,

$$\begin{pmatrix} (n_1,n_1) & \cdots & (n_{k-2},n_1) \\ \vdots & \ddots & \vdots \\ (n_1,n_{k-2}) & \cdots & (n_{k-2},n_{k-2}) \end{pmatrix}\begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_{k-2}\end{pmatrix}=-\begin{pmatrix} (n_1,v) \\ \vdots \\ (n_{k-2},v)\end{pmatrix}$$
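If the normals are linearly independent but not orthonormal, this Gram system can simply be solved numerically. A sketch with made-up data (the rows of `N` are the $n_i$):

```python
import numpy as np

# Two non-orthonormal normals of a plane through the origin in R^4 (made up).
N = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0]])
v = np.array([1.0, 2.0, 3.0, 4.0])

G = N @ N.T                       # Gram matrix with entries (n_i, n_m)
lam = np.linalg.solve(G, -N @ v)  # solve the linear system for lambda
x = v + N.T @ lam                 # stationarity: x = v + sum_i lambda_i n_i

print(N @ x)  # ~[0, 0]: x satisfies both plane constraints
```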

where we have used the orthogonality between $x$ and the normals. The matrix on the left is the Gram matrix of the normals; in general we would have to solve this linear system, but if we assume the given normals are orthonormal, the Gram matrix is simply the identity and we have

$$\lambda_i=-(n_i,v)$$

We now return to the derivative of the Lagrangian and substitute for the $\lambda_i$ terms, giving

$$x_j=v_j-\sum_{i=1}^{k-2}(n_i,v)n_{ij}$$

and as vectors,

$$x=v-\sum_{i=1}^{k-2}(n_i,v)n_i$$

We have generalized the three-dimensional formula by simply adding more terms to the subtraction, which I view as "removing" the normal components from the vector. Again, this required that the normal vectors be orthonormal.
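As a quick numerical check of the final formula, with hypothetical orthonormal normals in $\mathbb R^4$:

```python
import numpy as np

# Orthonormal normals of a plane through the origin in R^4 (made up).
n = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
v = np.array([3.0, 4.0, 5.0, 6.0])

# x = v - sum_i (n_i, v) n_i
x = v - sum(np.dot(ni, v) * ni for ni in n)

print(x)  # the normal components of v are removed: 0, 0, 5, 6
```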

Hope this helps!

ANSWER

Examine the formula that you have for $k=3$: it says that the orthogonal projection of $v$ onto the plane is obtained by subtracting its orthogonal projection onto the plane’s normal vector $n_1$ (which your formula assumes is a unit normal, by the way). In other words, the orthogonal projection onto the plane is equal to the orthogonal rejection from the plane’s orthogonal complement. This generalizes to any subspace $W$ of $\mathbb R^k$: the orthogonal projection of $v$ onto $W$ can be found by computing its projection onto $W^\perp$ and subtracting that from $v$.

In your question, you have a set of $k-2$ presumably linearly independent vectors that are all orthogonal to the plane onto which you’re projecting, so per the above, one way to compute the projection onto the plane is to compute the projection onto the span of these vectors and subtract it from $v$.

You haven’t said anything about these normal vectors other than that they’re orthogonal to the plane, so it’s hard to give you a specific way to compute the projection onto their span. If the normals happen to form an orthonormal set, then you can simply extend your formula by adding a term for each of the $n_i$. If not, you’ll have to do something more complicated. Something that will always work is to apply the Gram–Schmidt process to the sequence $n_1,n_2,\dots,n_{k-2},v$. The last vector that it produces (before any normalization) will be the orthogonal rejection of $v$ from the span of the $n_i$, which is exactly what you’re looking for. If you know something more about the normal vectors, there could be less tedious methods than this.
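That Gram–Schmidt suggestion can be sketched as follows (`plane_projection` is a hypothetical helper with made-up data; in practice `numpy.linalg.qr` on the same vectors as columns is more numerically stable):

```python
import numpy as np

def plane_projection(normals, v):
    """Gram-Schmidt on n_1, ..., n_{k-2}, then one final step on v:
    the result is the orthogonal rejection of v from the span of the
    normals, i.e. the projection of v onto the plane."""
    basis = []                                 # orthonormalized normals
    for w in normals:
        u = np.asarray(w, dtype=float)
        for b in basis:
            u = u - np.dot(b, u) * b           # remove component along b
        if np.linalg.norm(u) > 1e-12:          # skip dependent normals
            basis.append(u / np.linalg.norm(u))
    x = np.asarray(v, dtype=float)             # last Gram-Schmidt vector,
    for b in basis:                            # left unnormalized
        x = x - np.dot(b, x) * b
    return x

n1 = np.array([1.0, 1.0, 0.0, 0.0])            # non-orthonormal normals
n2 = np.array([1.0, 0.0, 1.0, 0.0])
v = np.array([1.0, 2.0, 3.0, 4.0])
x = plane_projection([n1, n2], v)
print(x)  # the projection of v onto the plane; orthogonal to n1 and n2
```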

ANSWER

Let $e_1, \dots, e_n$ be an orthonormal basis of the vector space. We have that for an arbitrary vector $v$, $$v = \langle v, e_1 \rangle e_1 + \dots + \langle v, e_n \rangle e_n.$$ A plane $H$ that passes through the origin is a linear subspace (here taken to be a $k$-dimensional subspace of an $n$-dimensional space), and is the span of $k$ orthonormal vectors, so WLOG let these be $e_1, \dots, e_k$. Then the orthogonal projection of $v$ onto $H$ is simply $$P_H(v) = \langle v, e_1 \rangle e_1 + \dots + \langle v, e_k \rangle e_k = v - (\langle v, e_{k + 1} \rangle e_{k + 1} + \dots + \langle v, e_n \rangle e_n).$$

Intuitively, in terms of unit normal vectors, the orthogonal projection of a vector onto a plane removes the components of the vector in the directions of the normal vectors, leaving a vector that lies in the plane.
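A small numerical illustration of that identity, using a made-up example with $n=4$, $k=2$, and a random orthonormal basis:

```python
import numpy as np

# A full orthonormal basis of R^4: columns of a random orthogonal matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
e = Q.T                 # rows e[0], ..., e[3] are orthonormal
v = np.array([3.0, 4.0, 5.0, 6.0])

# H = span(e_1, e_2): project onto H directly, or reject from H's complement.
proj = sum(np.dot(v, ei) * ei for ei in e[:2])
rej = v - sum(np.dot(v, ei) * ei for ei in e[2:])

print(np.allclose(proj, rej))  # True: both expressions give P_H(v)
```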