Cross product in > 3d


What exactly would we get by calculating the cross product of vectors in $\mathbb{R}^n$, $n>3$,

using the formula $\vec a \times \vec b = (\|\vec a\|\,\|\vec b\|\sin\theta)\,\vec n$,

where $\vec n$ is a unit vector normal to the two vectors $\vec a$ and $\vec b$, as in the 3D case?

More generally, for a set of $n-1$ vectors in $\mathbb{R}^n$, is there a generalization of the cross product?

2 Answers

Best answer

Why not use the matrix notation: $$\vec a\times \vec b=\left\vert\begin{array}{ccc}\vec i&\vec j&\vec k\\a_1&a_2&a_3\\b_1&b_2&b_3\end{array}\right\vert.$$ Now in $\mathbb{R}^4$ you have $$\vec a\times \vec b\times \vec c=\left\vert\begin{array}{cccc}\vec i&\ \vec j&\vec k&\vec l\\a_1&a_2&a_3&a_4\\b_1&b_2&b_3&b_4\\c_1&c_2&c_3&c_4\end{array}\right\vert,$$ where $\vec i,\vec j,\vec k,\vec l$ are the basis vectors for $\mathbb{R}^4$.

This extends easily to any dimension $n$: put the $n$ standard basis vectors in the first row and your $n-1$ vectors in the rows below.

In practice I wouldn't use the (possibly confusing) notation $$\vec a\times \vec b\times \vec c,$$ I would write something like $$\text{cross}(\vec a,\vec b,\vec c).$$
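As a sketch of how one might evaluate this symbolic determinant numerically (the helper name `cross` and the cofactor-expansion framing are my own, not from the answer): expanding along the first row, component $p$ of the result is $(-1)^p$ times the minor obtained by deleting column $p$ from the $(n-1)\times n$ matrix of input vectors.

```python
import numpy as np

def cross(*vectors):
    """Generalized cross product of n-1 vectors in R^n, via cofactor
    expansion along the symbolic first row of basis vectors."""
    M = np.array(vectors, dtype=float)          # (n-1) x n matrix
    n = M.shape[1]
    assert M.shape[0] == n - 1, "need n-1 vectors of dimension n"
    # Component p = (-1)^p times the minor with column p deleted.
    return np.array([(-1) ** p * np.linalg.det(np.delete(M, p, axis=1))
                     for p in range(n)])

# In R^3 this agrees with the ordinary cross product:
a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
print(cross(a, b))          # same as np.cross(a, b)
```

In $\mathbb{R}^4$ the same call takes three vectors, `cross(a, b, c)`, matching the $\text{cross}(\vec a,\vec b,\vec c)$ notation above.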


Jeff's answer is good, but I feel that it hides what is really going on a little bit.

Given $n-1$ vectors $v_1,v_2,...,v_{n-1}$ in $\mathbb{R}^n$, you can form a linear map

$$L: \mathbb{R}^n \to \mathbb{R}$$ by the rule

$$L(w) = \det(v_1,v_2, \ldots ,v_{n-1},w)$$

In other words, $L$ is the linear map you get by partially applying the determinant. It is linear because the determinant is linear in each slot, in particular the last one.

Any linear map $\mathbb{R}^n \to \mathbb{R}$ is given by a row vector, and its transpose is a column vector that represents the map via the dot product. In other words, there is a vector $\operatorname{Cross}(v_1,v_2, \ldots ,v_{n-1}) \in \mathbb{R}^n$ such that

$$\operatorname{Cross}(v_1,v_2, \ldots ,v_{n-1}) \cdot w = L(w) = \det(v_1,v_2, \ldots ,v_{n-1},w)$$

Now all the properties of the cross product flow from this one equation:

  1. $\operatorname{Cross}(v_1,v_2, \ldots ,v_{n-1}) \perp v_i$ for all $i = 1,2, \ldots ,n-1$ because $\operatorname{Cross}(v_1,v_2, \ldots ,v_{n-1}) \cdot v_i = \det(v_1,v_2,...,v_{n-1},v_i) = 0$, because the determinant is alternating.

  2. Let $C = \operatorname{Cross}(v_1,v_2, \ldots ,v_{n-1})$.
    Since $C$ is perpendicular to all of the $v_i$, the volume of the $n$-dimensional parallelepiped $P_n$ spanned by $v_1,v_2, \ldots ,v_{n-1},C$ is just the length of $C$ times the $n-1$-dimensional volume of the parallelepiped $P_{n-1}$ spanned by $v_1,v_2, \ldots , v_{n-1}$. But we can also think of the volume of $P_n$ as being given by the determinant of $(v_1,v_2, \ldots ,v_n,C)$, which in turn is just $C \cdot C$ by the defining property of the cross product. So we have $$C \cdot C = |C| \operatorname{Vol}(P_{n-1})$$ $$|C|^2 = |C| \operatorname{Vol}(P_{n-1})$$ $$|C| = \operatorname{Vol}(P_{n-1})$$ So just like in the 3-dimensional case, the length of the cross product is the $n-1$-dimensional volume of the parallelepiped spanned by the vectors going into the cross product.

  3. $C$ is placed in the orientation so that $\det(v_1,v_2, \ldots ,v_{n-1},C)$ is positive, because that is $C \cdot C$ which must be positive.

We see that the $n$-dimensional cross product enjoys all of the features you know and love about the cross product in $\mathbb{R}^3$: It is perpendicular to all $n-1$ vectors, its length is the volume spanned by those vectors, and $(v_1,v_2, \ldots ,v_{n-1},C)$ is positively oriented.

This is not just abstract nonsense. Try actually computing some cross products using this rule. All you need to get the row vector for $L$ is to put the basis vectors into the defining equation for $L$! This is what is behind this "take a determinant with vectors in some of the slots" business. The way I have told the story, the vectors would be more naturally viewed as column vectors, with $i,j,k,$ etc. occurring as the last column. Of course, this computes the same thing as Jeff mentioned above.
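Following that recipe, here is a small numpy sketch (the function name `cross_via_L` and the random test vectors are my own choices) that builds $C$ by feeding each standard basis vector into the last slot of the determinant, then checks the properties listed above:

```python
import numpy as np

def cross_via_L(vs):
    """vs: (n-1) x n array whose rows are v_1, ..., v_{n-1}.
    Returns the vector C with C . w = det(v_1, ..., v_{n-1}, w),
    found by evaluating L on each standard basis vector."""
    vs = np.asarray(vs, dtype=float)
    n = vs.shape[1]
    L = lambda w: np.linalg.det(np.vstack([vs, w]))  # partially applied determinant
    return np.array([L(e) for e in np.eye(n)])       # components C_i = L(e_i)

rng = np.random.default_rng(0)
vs = rng.standard_normal((3, 4))       # three random vectors in R^4
C = cross_via_L(vs)

print(np.allclose(vs @ C, 0))          # C is perpendicular to every v_i
print(np.isclose(C @ C, np.linalg.det(np.vstack([vs, C]))))   # defining equation, w = C
print(np.isclose(C @ C, np.linalg.norm(C)
                 * np.sqrt(np.linalg.det(vs @ vs.T))))        # C.C = |C| Vol(P_{n-1})
```

All three checks print `True`, numerically confirming the perpendicularity, length, and orientation ($C \cdot C > 0$) properties derived above.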