How does computing the determinant of a matrix with unit vectors give you the Cross Product?


Say you had $(a_x,a_y,a_z)\times(b_x,b_y,b_z)$, you would set up a matrix like the following:

$$\det\begin{bmatrix} \boldsymbol{i} & \boldsymbol{j} & \boldsymbol{k}\\ a_x & a_y & a_z\\ b_x & b_y & b_z \end{bmatrix}$$

And the result would be your cross product, i.e. the coordinates of a vector orthogonal to both. My question is: why? Why does forming it that way give you an orthogonal vector (with the right magnitude), and how is it related to the $\sin(\theta)$ definition of the cross product?


There are 4 answers below.

BEST ANSWER

And the resulting would be your Cross Product or the coordinates of an orthogonal vector. My question is why? Why does forming it that way give you the magnitude of an orthogonal vector

Your determinant can be written componentwise as $$ (a \times b)_i = \epsilon_{ijk} a_j b_k \quad (1) $$ where $\epsilon_{ijk}$ is the totally antisymmetric Levi-Civita symbol and the Einstein summation convention is used (we sum over repeated indices, here $j$ and $k$, each from $1$ to $3$).

If the tuple $(i,j,k)$ consists of distinct numbers from $\{1, 2, 3\}$, i.e. it is a permutation of $(1,2,3)$, then $\epsilon_{ijk}$ is defined as the sign of that permutation, $\pm1$; otherwise it vanishes.

So the above is the compact notation for \begin{align} (a \times b)_1 &= \epsilon_{123} a_2 b_3 + \epsilon_{132} a_3 b_2 = a_2 b_3 - a_3 b_2 \\ (a \times b)_2 &= \epsilon_{231} a_3 b_1 + \epsilon_{213} a_1 b_3 = a_3 b_1 - a_1 b_3 \\ (a \times b)_3 &= \epsilon_{312} a_1 b_2 + \epsilon_{321} a_2 b_1 = a_1 b_2 - a_2 b_1 \end{align}
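As a concrete sanity check, the componentwise formula $(1)$ can be evaluated numerically. The sketch below (using NumPy; the example vectors are arbitrary) builds the Levi-Civita symbol as a $3\times3\times3$ array and contracts it against $a$ and $b$:

```python
import numpy as np

# Levi-Civita symbol: +1 on even permutations of (0, 1, 2),
# -1 on odd permutations, 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def cross_via_epsilon(a, b):
    """(a x b)_i = eps_ijk a_j b_k, summing over the repeated indices j, k."""
    return np.einsum('ijk,j,k->i', eps, a, b)

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cross_via_epsilon(a, b))  # [-3.  6. -3.], same as np.cross(a, b)
```

The `einsum` call does exactly the implicit summation of the Einstein convention: $j$ and $k$ appear twice and are summed, $i$ survives as the free index.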

Compare this with the definition of the determinant, which is an alternating multilinear form in its $n$ arguments: $$ \det A = \det(a_1, \dotsc, a_n) = \epsilon_{i_1 i_2 \dotsm i_n} a_{1 i_1} a_{2 i_2} \dotsm a_{n i_n} \quad (2) $$ It is the signed sum of all permutations of the components.

So indeed it happens that $$ \det(e_i, a, b) = e_i \cdot (a \times b) = (a \times b)_i \quad (3) $$ where $e_i$ is the $i$-th canonical base vector \begin{align} e_1 = (1, 0, 0)^T \\ e_2 = (0, 1, 0)^T \\ e_3 = (0, 0, 1)^T \end{align}
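Equation $(3)$ can likewise be checked numerically. In the sketch below (arbitrary example vectors), each determinant with a canonical basis vector as first row reproduces one component of $a \times b$:

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])
b = np.array([0.0, 4.0, 1.0])
cross = np.cross(a, b)  # [-13.  -2.   8.]

for i in range(3):
    e_i = np.eye(3)[i]
    # Rows of the matrix are e_i, a, b, matching det(e_i, a, b) in (3).
    d = np.linalg.det(np.vstack([e_i, a, b]))
    print(i, round(d, 10), cross[i])  # determinant equals the i-th component
```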

The form you used employs the rule of Sarrus to calculate the determinant, which holds only for three dimensions. Equation $(2)$ holds for arbitrary dimension.

The definition of the vector product, equation $(1)$, consists of signed sums of permutations of the vector components, and the definition of the determinant $(2)$ likewise makes use of signed sums of permutations of its argument components. That is why the cross product can be written as a determinant.

and how is it related to the $\sin(\theta)$ definition of Cross Product.

You probably mean $$ \lVert a \times b \rVert = \lVert a \rVert \lVert b \rVert \sin\angle(a, b) \quad (4) $$ This follows from the scalar triple product $$ a \cdot (b \times c) = \det(a, b, c), $$ which expresses it as a determinant; we used this above with the $i$-th canonical basis vector in equation $(3)$. The determinant gives the signed volume of the parallelepiped (picture a leaning stack of cards, like the Tower of Pisa) spanned by the vectors $a, b, c$.

From this one can derive equation $(4)$: the parallelepiped spanned by $a$, $b$ and the unit normal $n = (a \times b)/\lVert a \times b \rVert$ has height $1$, so its volume $\det(n, a, b) = \lVert a \times b \rVert$ equals the area of the base parallelogram, which is $\lVert a \rVert \lVert b \rVert \sin\angle(a, b)$.
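A quick numerical spot-check of equation $(4)$, with arbitrarily chosen example vectors:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([2.0, 3.0, 0.0])

# Left side of (4): the norm of the cross product.
lhs = np.linalg.norm(np.cross(a, b))

# Right side of (4): |a| |b| sin(theta), with theta from the dot product.
theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
rhs = np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta)

print(lhs, rhs)  # both equal sqrt(61), up to rounding
```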


Looking at the computation from the right angle: what you compute is an element of the dual space, that is, $\vec a \times \vec b$ maps any vector $\vec c$ to $\mathbb R$ (or $\mathbb C$) in such a way that $\det(\vec c, \vec a,\vec b)=(\vec a\times\vec b)\cdot \vec c$.

You can see this by replacing the unit vectors $(i, j, k)$ in the determinant by the components $(c_1, c_2, c_3)$ of $\vec c$. Everything else follows from the properties of $\det$.

In particular, $\vec a\times\vec b$ is orthogonal to $\vec a$ because $\det(\vec a, \vec a, \vec b)=0$ (a determinant with two equal arguments vanishes), and similarly for $\vec b$.
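This orthogonality argument can be verified directly. In the sketch below (arbitrary example vectors), the determinant with a repeated argument vanishes, and the corresponding dot products vanish with it:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([3.0, 0.0, 5.0])
c = np.cross(a, b)  # [10. -8. -6.]

# det(a, a, b) = 0 because two arguments coincide ...
print(np.linalg.det(np.vstack([a, a, b])))  # 0.0 (up to rounding)
# ... which is exactly the statement (a x b) . a = 0, and likewise for b.
print(c @ a, c @ b)  # 0.0 0.0
```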


If you start with the definition of the cross product as $$\underline{a}\times\underline{b}=|\underline{a}||\underline{b}|\sin \theta \,\underline{\hat{n}},$$ where $\theta$ is the angle between $\underline{a}$ and $\underline{b}$ and $\underline{\hat{n}}$ is the unit vector perpendicular to both $\underline{a}$ and $\underline{b}$ in the sense of a right-hand triad, then it follows from this definition that:

1. $\underline{i}\times\underline{i}=\underline{0}=\underline{j}\times\underline{j}=\underline{k}\times\underline{k}$

2. $\underline{i}\times\underline{j}=\underline{k}$, $\underline{j}\times\underline{k}=\underline{i}$, and $\underline{k}\times\underline{i}=\underline{j}$

3. If the letters are in anticyclic order, the result is correspondingly negative, so, for example, $\underline{j}\times\underline{i}=-\underline{k}$, and so on.

Therefore, if we assume the distributivity of the cross product (not proved here), then the cross product of two vectors gives exactly the same result as evaluating the determinant.
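That distributive expansion can be carried out mechanically. The sketch below hardcodes the nine basis products from rules 1–3 and expands $(\sum_i a_i e_i)\times(\sum_j b_j e_j)$ term by term, comparing against NumPy's cross product (example vectors are arbitrary):

```python
import numpy as np

i_, j_, k_ = np.eye(3)
zero = np.zeros(3)

# The nine basis cross products, taken straight from rules 1-3
# (indices 0, 1, 2 stand for i, j, k).
table = {
    (0, 0): zero, (1, 1): zero, (2, 2): zero,  # rule 1: e x e = 0
    (0, 1): k_, (1, 2): i_, (2, 0): j_,        # rule 2: cyclic order
    (1, 0): -k_, (2, 1): -i_, (0, 2): -j_,     # rule 3: anticyclic order
}

def cross_by_distributivity(a, b):
    """Expand (a1 i + a2 j + a3 k) x (b1 i + b2 j + b3 k) term by term."""
    return sum(a[i] * b[j] * table[(i, j)] for i in range(3) for j in range(3))

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.0, 5.0])
print(cross_by_distributivity(a, b))  # [ 10. -11.   4.], same as np.cross(a, b)
```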


The determinant of a $3\times3$ matrix can be viewed as the triple product of its columns (or rows): $$ \begin{align} \det\begin{bmatrix} x_1&y_1&z_1\\ x_2&y_2&z_2\\ x_3&y_3&z_3 \end{bmatrix} &= \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix} \times \begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix} \cdot \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}\\ &= \begin{bmatrix} (x\times y)_1\\ (x\times y)_2\\ (x\times y)_3 \end{bmatrix} \cdot \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}\tag{1} \end{align} $$ If we replace $\begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}$ in $(1)$ by $\begin{bmatrix} \boldsymbol{i}\\ \boldsymbol{j}\\ \boldsymbol{k} \end{bmatrix}$, we get $$ \begin{align} \det\begin{bmatrix} x_1&y_1&\boldsymbol{i}\\ x_2&y_2&\boldsymbol{j}\\ x_3&y_3&\boldsymbol{k} \end{bmatrix} &= \begin{bmatrix} (x\times y)_1\\ (x\times y)_2\\ (x\times y)_3 \end{bmatrix} \cdot \begin{bmatrix} \boldsymbol{i}\\ \boldsymbol{j}\\ \boldsymbol{k} \end{bmatrix}\\[6pt] &=(x\times y)_1\boldsymbol{i}+(x\times y)_2\boldsymbol{j}+(x\times y)_3\boldsymbol{k}\\[18pt] &=x\times y\tag{2} \end{align} $$
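Identity $(1)$ above, the determinant as the triple product of the columns, can be spot-checked numerically; the matrix below is an arbitrary (nonsingular) example:

```python
import numpy as np

M = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.5]])
x, y, z = M[:, 0], M[:, 1], M[:, 2]  # the three columns

print(np.linalg.det(M))    # -1.5 (up to rounding)
print(np.cross(x, y) @ z)  # -1.5, the triple product (x x y) . z
```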