Why is the Right Hand Rule for the vector product $\vec{a}\times\vec{b}$ true?


Why is the Right Hand Rule true? The only thing that I'm searching for is its justification.

Remember that $$\vec{a}\times\vec{b}=\begin{vmatrix} a_{2} & a_{3}\\ b_{2} & b_{3} \end{vmatrix}\hat{i}-\begin{vmatrix} a_{1} & a_{3}\\ b_{1} & b_{3} \end{vmatrix}\hat{j}+\begin{vmatrix} a_{1} & a_{2}\\ b_{1} & b_{2} \end{vmatrix}\hat{k}$$
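The component formula above can be checked numerically. Here is a small sketch (my addition, not from the original post, assuming NumPy is available) that computes $\vec{a}\times\vec{b}$ from the three $2\times2$ determinants and compares the result with NumPy's built-in `np.cross`:

```python
import numpy as np

def cross_from_determinants(a, b):
    """Cross product assembled from the 2x2 cofactor determinants."""
    return np.array([
        a[1] * b[2] - a[2] * b[1],     #  | a2 a3 ; b2 b3 |  -> i component
        -(a[0] * b[2] - a[2] * b[0]),  # -| a1 a3 ; b1 b3 |  -> j component
        a[0] * b[1] - a[1] * b[0],     #  | a1 a2 ; b1 b2 |  -> k component
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cross_from_determinants(a, b))                               # [-3.  6. -3.]
print(np.allclose(cross_from_determinants(a, b), np.cross(a, b)))  # True
```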


There are 5 best solutions below

---

If what you're asking is whether or not the right hand rule makes sense, it does for the following reason:

The cross product of two linearly independent vectors $a$ and $b$ is defined as a vector, written $a \times b$, that is perpendicular (orthogonal) to both $a$ and $b$, with direction given by the right-hand rule and magnitude equal to the area of the parallelogram that $a$ and $b$ span. So the cross product is a binary operation on $\mathbb{R}^3$, and the right-hand rule gives the direction of $a \times b$. In other words, there is nothing to prove: the right-hand rule is the convention that fixes which of the two possible perpendicular directions $a \times b$ points in.

---

If $R$ is a proper rotation matrix in $3$ dimensions, it is well known that $Ra\times Rb=R(a\times b)$ (this identity has been proven on this site several times). So in proving the right-hand rule, we can assume without loss of generality that $\vec{a}=a\hat{i}$ and $\vec{b}=b_x\hat{i}+b_y\hat{j}$ with $a,\,b_x,\,b_y\ge0$. Then $\vec{a}\times\vec{b}=ab_y\hat{k}$ is a non-negative multiple of $\hat{k}$, as required.
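The equivariance identity $Ra\times Rb=R(a\times b)$ can be spot-checked numerically. A minimal sketch (my addition, assuming NumPy; `rotation_z` is an illustrative helper, not from the answer) using a proper rotation about the $z$-axis:

```python
import numpy as np

def rotation_z(theta):
    """Proper rotation (det = +1) about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rotation_z(0.7)
a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 4.0, -1.0])

# Equivariance: Ra x Rb equals R(a x b) for a proper rotation.
print(np.allclose(np.cross(R @ a, R @ b), R @ np.cross(a, b)))  # True
print(np.isclose(np.linalg.det(R), 1.0))                        # True
```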

---

The cross product in $3$-space is a lucky coincidence.

Actually, the cross product of two vectors lives in a different space, namely a component of the exterior algebra on $\mathbb{R}^3$, which has a multiplication operation often denoted by $\wedge$. The lucky coincidence is due to

  1. the space we live in is three-dimensional;
  2. the exterior algebra of $\mathbb{R}^3$ is a direct sum of a space of dimension $1$ (the scalars), a copy of $\mathbb{R}^3$ itself, the space of $2$-vectors, which also has dimension $3$, and another space of dimension $1$ (the $3$-vectors).

Since the space of $2$-vectors has dimension $3$ and it has as basis the $2$-vectors $\mathbf{i}\wedge\mathbf{j}$, $\mathbf{i}\wedge\mathbf{k}$ and $\mathbf{j}\wedge\mathbf{k}$, one can define an isomorphism of this space onto $\mathbb{R}^3$ by deciding where the basis vectors should go.

A quite natural choice is $\mathbf{i}\wedge\mathbf{j}\mapsto\mathbf{k}$, $\mathbf{i}\wedge\mathbf{k}\mapsto-\mathbf{j}$ and $\mathbf{j}\wedge\mathbf{k}\mapsto\mathbf{i}$.

With this choice it turns out that the wedge product in the exterior algebra of two vectors is mapped to a vector that's defined in the usual way using the right-hand rule. If you choose $\mathbf{i}\wedge\mathbf{k}\mapsto\mathbf{j}$, then you'd be using the left-hand rule. Nothing in physics would change, apart some signs in formulas, but in a uniform and predictable way.

Why the minus sign? Because in the exterior algebra it holds that $$ \mathbf{v}\wedge\mathbf{w}=-(\mathbf{w}\wedge\mathbf{v}) $$ and so $\mathbf{k}\wedge\mathbf{i}\mapsto\mathbf{j}$ is much nicer.
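The isomorphism described above can be made concrete. This sketch (my addition, assuming NumPy; `wedge_coeffs` and `to_vector` are illustrative names) represents $v\wedge w$ by its coefficients in the basis $(\mathbf i\wedge\mathbf j,\ \mathbf i\wedge\mathbf k,\ \mathbf j\wedge\mathbf k)$ and applies the map $\mathbf i\wedge\mathbf j\mapsto\mathbf k$, $\mathbf i\wedge\mathbf k\mapsto-\mathbf j$, $\mathbf j\wedge\mathbf k\mapsto\mathbf i$; the result coincides with the usual cross product:

```python
import numpy as np

def wedge_coeffs(v, w):
    """Coefficients of v ^ w in the 2-vector basis (i^j, i^k, j^k)."""
    return np.array([
        v[0] * w[1] - v[1] * w[0],  # coefficient of i^j
        v[0] * w[2] - v[2] * w[0],  # coefficient of i^k
        v[1] * w[2] - v[2] * w[1],  # coefficient of j^k
    ])

def to_vector(c):
    """Apply the isomorphism i^j -> k, i^k -> -j, j^k -> i."""
    return np.array([c[2], -c[1], c[0]])

v = np.array([1.0, 2.0, 3.0])
w = np.array([-1.0, 0.5, 2.0])

print(np.allclose(to_vector(wedge_coeffs(v, w)), np.cross(v, w)))  # True
# Antisymmetry of the wedge: v ^ w = -(w ^ v).
print(np.allclose(wedge_coeffs(v, w), -wedge_coeffs(w, v)))        # True
```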

There is no way to define an analog of the cross product in spaces that are not $3$-dimensional: in dimension $n$ the space of $2$-vectors has dimension $n(n-1)/2$, which equals $n$ only when $n=3$. However, the lucky coincidence allows for using a handy formalism that looks just like an operation on vectors.

It is not a genuine operation on vectors, because it is not invariant under every change of reference frame, but only under those that preserve orientation (this is why $a\times b$ is often called a pseudovector).

---

I would argue that the formula definition of the cross product is the wrong one to start from, precisely because it seems arbitrary, so let's construct the cross product on $\mathbb R^3$ ourselves.

We want a natural way to pair vectors $v$ and $w.$ Well, we're in three dimensions, so an obvious candidate for $v\times w$ might be something which is orthogonal to both $v$ and $w.$ Here we are using the fact that we have a standard inner product on $\mathbb R^3,$ the dot product, which tells us which vectors are orthogonal to which other vectors. There is of course an entire line orthogonal to both $v$ and $w,$ but now we only need to pick which vector on this line will be $v\times w.$ What should its magnitude be? $|v\times w|=|v||w|\sin\theta,$ the area of the parallelogram spanned by $v$ and $w,$ seems like the most natural choice: it is symmetric in $v$ and $w$ and scales linearly in each. We could also use $2|v|\cdot 50|w|\sin\theta,$ but that feels more arbitrary, so the parallelogram area it is. Great, so now we have two vectors to pick from (i.e. a vector on the line orthogonal to $v$ and $w$ with magnitude equal to that area, or its negative). Now we use the fact that $\mathbb R^3$ comes with a standard orientation.

An orientation of a vector space $V$ is an equivalence class $\mu$ of ordered bases of $V,$ where two bases $(v_1,\ldots, v_n)$ and $(w_1,\ldots, w_n)$ are equivalent if the linear transformation $v_i\mapsto w_i$ has positive determinant.

It's a simple exercise to show that there are only two orientations on a vector space, and they behave like you would expect: rotations and positive scalings preserve orientation, while reflections reverse it. $\mathbb R^3$ has a standard orientation, namely the one given by the ordered basis $$\left(\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix},\begin{pmatrix}0\\0\\1\end{pmatrix}\right).$$ You'll notice that this follows the right hand rule, i.e. if you take one basis vector along with the next one, then you can use the right hand rule to find the direction of the third. This actually uniquely determines the orientation of $\mathbb R^3,$ so there is a "right-hand" orientation and a "left-hand" orientation. We picked the right-hand one because the church thought lefties were evil or something (I didn't pay attention in history class when I was in school).

Anyway, my point is that we want to keep the orientation that we start with, so we pick for $v\times w$ the vector for which $(v,w,v\times w)$ satisfies the right hand rule.

Okay great! This is a sensible definition of a product that avoids as many arbitrary choices as possible, and anything we did choose we tried to make as natural as possible. But why should this completely reasonable product be given by the formula you gave? The deeper reason has to do with exterior algebras (see @egreg's answer). If we already know the formula, then we can just verify that it agrees with our definition: use the dot product to verify that the formula makes $v\times w$ orthogonal to $v$ and $w,$ check that the norm satisfies $|v\times w|=|v||w|\sin\theta,$ and then verify that the matrix $$\begin{pmatrix} v_1 & w_1 & \begin{vmatrix} v_2 & v_3\\ w_2 & w_3\end{vmatrix}\\ v_2 & w_2 & -\begin{vmatrix} v_1 & v_3\\ w_1 & w_3\end{vmatrix} \\ v_3 & w_3 & \begin{vmatrix} v_1 & v_2\\ w_1 & w_2\end{vmatrix}\end{pmatrix}$$ (whose third column is $v\times w$) has positive determinant. Since those three properties determine $v\times w$ uniquely, the formula gives the vector we're looking for.
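The three verifications just described (orthogonality, magnitude, positive orientation) can be run numerically. A sketch, assuming NumPy (my addition, not part of the original answer):

```python
import numpy as np

v = np.array([2.0, -1.0, 0.5])
w = np.array([0.3, 1.2, -2.0])
c = np.cross(v, w)

# (1) Orthogonal to both inputs.
print(np.isclose(v @ c, 0.0), np.isclose(w @ c, 0.0))  # True True

# (2) Magnitude = area of the parallelogram = |v||w| sin(theta).
cos_t = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
area = np.linalg.norm(v) * np.linalg.norm(w) * np.sqrt(1.0 - cos_t**2)
print(np.isclose(np.linalg.norm(c), area))  # True

# (3) (v, w, v x w) is positively oriented: det of the column matrix > 0.
print(np.linalg.det(np.column_stack([v, w, c])) > 0)  # True
```

In fact the determinant in step (3) equals $|v\times w|^2$, which is why it is automatically positive whenever $v$ and $w$ are linearly independent.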

So the real reason that the right hand rule is satisfied is because we chose it to be, and the formula that you give is an artifact of that.

---

Here is a characterization of cross-product types of operations in terms of one algebraic property and two geometric properties.

We want to associate to each pair of vectors $v$ and $w$ a third vector $f(v,w)$ such that (i) $f(v,w)$ is perpendicular to $v$ and $w$, (ii) $f(v,w)$ is linear in each of $v$ and $w$ when the other is fixed, and (iii) rotations of $\mathbf R^3$ that fix the origin commute with this operation.

Property (ii) means: $f(v_1 + v_2,w) = f(v_1,w) + f(v_2,w)$ for all $v_1, v_2, w$ in $\mathbf R^3$, $f(v,w_1 + w_2) = f(v,w_1) + f(v,w_2)$ for all $v, w_1, w_2$ in $\mathbf R^3$, and $f(cv,w) = cf(v,w)$ and $f(v,cw) = cf(v,w)$ for all $c \in \mathbf R$ and $v, w \in \mathbf R^3$.
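Property (ii), bilinearity, is easy to check numerically for the cross product itself. A short NumPy sketch (my addition, not part of the original answer) testing additivity and homogeneity in the first argument on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2, w = rng.normal(size=(3, 3))  # three random vectors in R^3
c = 2.5

# Additivity in the first argument: f(v1 + v2, w) = f(v1, w) + f(v2, w).
print(np.allclose(np.cross(v1 + v2, w), np.cross(v1, w) + np.cross(v2, w)))  # True
# Homogeneity in the first argument: f(c v1, w) = c f(v1, w).
print(np.allclose(np.cross(c * v1, w), c * np.cross(v1, w)))  # True
```

The checks in the second argument are symmetric and omitted here.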

Property (iii) means $f(Rv,Rw) = R(f(v,w))$ for all rotations $R$ of $\mathbf R^3$ fixing the origin that preserve orientation ($\det(R)$ is $1$, not $-1$).

Note I am not directly assuming $|f(v,w)| = |v||w|$.

Theorem. A function $f \colon \mathbf R^3 \times \mathbf R^3 \to \mathbf R^3$ satisfies (i), (ii), and (iii) if and only if $f$ is a scalar multiple of the cross product on $\mathbf R^3$: there is $a \in \mathbf R$ such that $f(v,w) = a(v \times w)$ for all $v, w \in \mathbf R^3$.

Therefore, by adding the condition $f(\mathbf i,\mathbf j) = \mathbf k$, we force $a = 1$, so $f$ is the cross product.