Formula relating three coplanar vectors


Let $\mathbf{\vec{u}}$, $\mathbf{\vec{v}}$ and $\mathbf{\vec{w}}$ be Euclidean vectors in $\mathbb{R}^3$ such that:

  • All of the vectors $\mathbf{\vec{u}}$, $\mathbf{\vec{v}}$ and $\mathbf{\vec{w}}$ are non-zero (i.e., $\mathbf{\vec{0}}\notin\{\mathbf{\vec{u}},\mathbf{\vec{v}},\mathbf{\vec{w}}\}$).
  • The vectors $\mathbf{\vec{u}}$ and $\mathbf{\vec{v}}$ are non-collinear (i.e., $\mathbf{\vec{u}}\wedge\mathbf{\vec{v}}\neq\mathbf{\vec{0}}$).
  • The vectors $\mathbf{\vec{u}}$, $\mathbf{\vec{v}}$ and $\mathbf{\vec{w}}$ are coplanar (i.e., $[\mathbf{\vec{u}},\mathbf{\vec{v}},\mathbf{\vec{w}}]=0$).

Then there exist real numbers $\lambda$ and $\mu$ such that: $$\mathbf{\vec{w}}=\lambda\mathbf{\vec{u}}+\mu\mathbf{\vec{v}}$$

My question is the following:

"Is there a general formula for $\lambda$ and $\mu$ in terms of the vectors $\mathbf{\vec{u}}$, $\mathbf{\vec{v}}$ and $\mathbf{\vec{w}}$ in $\mathbb{R}^3$ (or, if necessary, in $\mathbb{R}^2$)?"

EDIT: I know of a more general formula relating four Euclidean vectors $\mathbf{\vec{u}}$, $\mathbf{\vec{v}}$, $\mathbf{\vec{w}}$ and $\mathbf{\vec{x}}$ in $\mathbb{R}^3$, which I state as follows: $$[\mathbf{\vec{u}},\mathbf{\vec{v}},\mathbf{\vec{w}}]\neq 0 \implies \mathbf{\vec{x}}=\displaystyle\frac{[\mathbf{\vec{x}},\mathbf{\vec{v}},\mathbf{\vec{w}}]}{[\mathbf{\vec{u}},\mathbf{\vec{v}},\mathbf{\vec{w}}]}\mathbf{\vec{u}}+\frac{[\mathbf{\vec{u}},\mathbf{\vec{x}},\mathbf{\vec{w}}]}{[\mathbf{\vec{u}},\mathbf{\vec{v}},\mathbf{\vec{w}}]}\mathbf{\vec{v}}+\frac{[\mathbf{\vec{u}},\mathbf{\vec{v}},\mathbf{\vec{x}}]}{[\mathbf{\vec{u}},\mathbf{\vec{v}},\mathbf{\vec{w}}]}\mathbf{\vec{w}}$$ I ask my question above in search of an analogous formula relating three coplanar Euclidean vectors, if one exists.
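The four-vector identity above is just Cramer's rule in disguise, with the scalar triple product $[a,b,c]=\det(a,b,c)$. A quick numerical sanity check (a sketch using NumPy; the helper name `triple` is mine):

```python
import numpy as np

def triple(a, b, c):
    """Scalar triple product [a, b, c] = det of the matrix with columns a, b, c."""
    return np.linalg.det(np.column_stack([a, b, c]))

rng = np.random.default_rng(0)
u, v, w, x = rng.standard_normal((4, 3))  # four random vectors in R^3

d = triple(u, v, w)
assert abs(d) > 1e-12  # [u, v, w] != 0, so the formula applies

# Rebuild x from its triple-product coordinates in the basis (u, v, w):
x_rebuilt = (triple(x, v, w) / d) * u \
          + (triple(u, x, w) / d) * v \
          + (triple(u, v, x) / d) * w

print(np.allclose(x, x_rebuilt))  # True
```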


On BEST ANSWER

Let us take the dot product of $w = \lambda u + \mu v$ with $u$ and with $v$.

Thus we have $$(w, u) = \lambda (u,u) + \mu(u,v)$$ $$(w, v) = \lambda (u,v) + \mu(v,v)$$ This gives two equations in the two unknowns $\lambda$ and $\mu$.

Put $A =(u,u) (v,v) - (u,v)^2$, $B = (w,u)(v,v) - (u,v)(w,v)$ and $C= (u,u) (v,w) - (u,w) (u,v)$. Since $u$ and $v$ are non-collinear, the Cauchy–Schwarz inequality is strict and $A > 0$. Then, by Cramer's rule, $\lambda = \frac{B}{A}$ and $\mu = \frac{C}{A}$.
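A short sketch of this recipe in NumPy (the function name `coplanar_coeffs` is mine):

```python
import numpy as np

def coplanar_coeffs(u, v, w):
    """Return (lam, mu) with w = lam*u + mu*v, via the dot-product equations."""
    A = np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2   # Gram determinant, > 0
    B = np.dot(w, u) * np.dot(v, v) - np.dot(u, v) * np.dot(w, v)
    C = np.dot(u, u) * np.dot(v, w) - np.dot(u, w) * np.dot(u, v)
    return B / A, C / A

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 1.0])
w = 3.0 * u - 2.0 * v          # coplanar with u and v by construction

lam, mu = coplanar_coeffs(u, v, w)
print(lam, mu)  # 3.0 -2.0 (up to rounding)
```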


The following is a generalization of Botnakov N.'s approach. Suppose we have $m$ linearly independent (hence nonzero) column vectors $v_1,v_2,\ldots,v_m \in\mathbb{R}^n$. If $u\in\mathbb{R}^n$ lies in the span of $\{v_k\}_{k=1}^m$, how do we express $u$ as a linear combination of these vectors?

To simplify notation let $V=(v_1,v_2,\ldots, v_m)\in\mathbb{R}^{n,m}$, i.e., $V$ is the matrix with the $v$'s as column vectors. We seek a vector $c\in\mathbb{R}^m$ such that $u=V c = \sum_{k=1}^m v_k c_k.$ Since $V$ is not a square matrix in general, we can't simply invert it. But if we act on the left with $V^\top$ we obtain $V^\top u = V^\top Vc$, and $V^\top V\in\mathbb{R}^{m,m}$ is invertible: it is the Gram matrix of the $v$'s, and its determinant is positive precisely because the columns of $V$ are linearly independent. Hence we obtain $$c=(V^\top V)^{-1}V^\top u.$$
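A minimal sketch of this recipe, assuming $u$ is built to lie in the column space of $V$ (solving the normal equations directly rather than forming the inverse):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
V = rng.standard_normal((n, m))        # random columns: linearly independent a.s.
c_true = np.array([2.0, -1.0, 0.5])
u = V @ c_true                         # u is in the column space by construction

# c = (V^T V)^{-1} V^T u, computed via np.linalg.solve for numerical stability
c = np.linalg.solve(V.T @ V, V.T @ u)
print(np.allclose(c, c_true))  # True
```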

The matrix $V^+:=(V^\top V)^{-1}V^\top$ is known as the Moore–Penrose pseudoinverse of $V$. On the one hand, we have $V^+ V = I_m$ by construction. But in general $V V^+\neq I_n$: $V^+$ is a left inverse but not a right inverse. (This owes to $V$ having linearly independent columns but not, in general, linearly independent rows.) However, as the above work shows, $VV^+$ does act as the identity on the span of the $v$'s—that is, on the column space of $V$.

This kind of computation is moreover typical of ordinary least-squares problems, with $V^\top u=V^\top Vc$ being the so-called normal equations. To illustrate the connection, suppose that $u$ does not lie in the column space of $V$. Then $u^+:=VV^+ u =V (V^\top V)^{-1} V^\top u\neq u$: The vector $u$ isn't in the column space of $V$, so we cannot and do not recover $u$ as a linear combination of the $v$'s. However, we do have $$V^\top u^+ = V^\top V(V^\top V)^{-1}V^\top u =V^\top u.$$ Therefore $V^\top (u-u^+)=0$, i.e., the residual $u-u^+$ is perpendicular to the column vectors of $V$. In other words, $u^+$ is the orthogonal projection of $u$ onto the column space of $V$. This implies that $u^+ = Vc$ is the unique vector in the column space of $V$ which minimizes $\|u-Vc\|^2$—that is, $c = V^+ u$ is the ordinary least-squares solution to the overdetermined system $u=Vc$.
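The least-squares interpretation can be checked numerically; a sketch assuming a random $u$ (which almost surely lies outside the column space of a tall $V$):

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((6, 2))        # tall matrix: 6 rows, 2 columns
u = rng.standard_normal(6)             # almost surely not in col(V)

c = np.linalg.solve(V.T @ V, V.T @ u)  # normal equations
u_plus = V @ c                         # projection of u onto col(V)

# Residual is perpendicular to the columns of V:
print(np.allclose(V.T @ (u - u_plus), 0.0))  # True

# c agrees with NumPy's built-in least-squares solver:
c_lstsq, *_ = np.linalg.lstsq(V, u, rcond=None)
print(np.allclose(c, c_lstsq))  # True
```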