Cross product in 2D by rendering the k component 0


I was wondering why the cross product does not work in 2D. For example, would it work if I set the $k$ component to $0$, so that the cross product acts on vectors lying only in the $xy$-plane and results in a vector along a single axis?

i.e. If $v_1=[a,b,c]$ and $v_2=[d,e,f]$,

But since the vectors lie only in the $xy$-plane,

let $c=f=0$.

Thus

$v_1\times v_2=[\,b(0)-(0)e,\ (0)d-a(0),\ ae-bd\,]$

Which effectively equates to $v_1\times v_2=[0,0,ae-bd]$.

Am I breaking anything by doing this? Surely, since 2D vectors are just a subset of the 3D vectors, rendering the $k$ component $0$ doesn't change anything?

All thoughts are appreciated on my line of reasoning.
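A quick numerical sanity check of this embedding, in plain Python (the helper name `cross3` is illustrative, not from the question):

```python
def cross3(u, v):
    """Standard 3D cross product of u and v."""
    return [
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ]

# Two "2D" vectors embedded in 3D with k component 0.
a, b = 3, 4          # v1 = [a, b, 0]
d, e = 1, 2          # v2 = [d, e, 0]
v1 = [a, b, 0]
v2 = [d, e, 0]

result = cross3(v1, v2)
print(result)  # [0, 0, 2]: the first two components vanish, the third is ae - bd
```

As expected, the output has the form $[0,0,ae-bd]$, here $[0,0,3\cdot2-4\cdot1]=[0,0,2]$.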


3 Answers

Best answer

$v_2\times v_1$ is perpendicular to the plane containing $v_1$ and $v_2$. If you choose $v_1,v_2$ to lie in the $xy$-plane, the only vectors perpendicular to this plane are of the form $(0,0,p)$, which explains why $v_2\times v_1$ is of this form.
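This perpendicularity is easy to check numerically; a minimal plain-Python sketch (helper names `cross3` and `dot` are illustrative):

```python
def cross3(u, v):
    """Standard 3D cross product of u and v."""
    return [
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ]

def dot(u, v):
    """Euclidean dot product."""
    return sum(x * y for x, y in zip(u, v))

# Two vectors in the xy-plane.
v1 = [2, 5, 0]
v2 = [-1, 3, 0]
n = cross3(v2, v1)

# n is perpendicular to both v1 and v2 ...
print(dot(n, v1), dot(n, v2))  # 0 0
# ... and since v1, v2 lie in the xy-plane, n has the form (0, 0, p).
print(n)  # [0, 0, -11]
```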

Answer

Strictly speaking, the cross product is undefined in $\mathbb R^2$. You can embed the plane into $\mathbb R^3$ the way you’re proposing, and yes, the geometric meanings of the cross product still apply. However, you can perform the same computation entirely in $\mathbb R^2$ with a determinant: If $v_1=[a,b]^T$ and $v_2=[c,d]^T$, then $\det[v_1,v_2]=ad-bc$ is exactly the same as the $z$-component of the cross product $[a,b,0]\times[c,d,0]$. This, too, gives you the (signed) area of the parallelogram defined by $v_1$ and $v_2$ and, unlike the cross product, generalizes to higher dimensions. That is, in $\mathbb R^n$, the absolute value of $\det[v_1,\dots,v_n]$ is the volume of the parallelotope defined by these vectors and the sign indicates their relative orientation. If it vanishes, the vectors aren’t linearly independent and the parallelotope “collapses.”
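The agreement between the $2\times 2$ determinant and the $z$-component of the embedded cross product can be verified directly; a small plain-Python sketch (helper names `det2` and `cross3` are illustrative):

```python
def det2(v1, v2):
    """2x2 determinant det[v1, v2] with v1, v2 as columns: ad - bc."""
    return v1[0] * v2[1] - v1[1] * v2[0]

def cross3(u, v):
    """Standard 3D cross product of u and v."""
    return [
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ]

a, b = 1, 4
c, d = 2, 3
area = det2([a, b], [c, d])                 # signed area in R^2
z = cross3([a, b, 0], [c, d, 0])[2]         # z-component after embedding in R^3
print(area, z)  # -5 -5: same number; the sign records the orientation
```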

Answer

There is an equivalent of the cross product in 2D, which does the useful things like measuring area, but it produces a scalar quantity (not surprisingly in the case of area, which should be directed in 3D but is a scalar quantity in 2D, like the volume is in 3D). We can define the cross product as the bilinear map that satisfies $\mathbf{e}_i \times \mathbf{e}_j = \epsilon_{ijk} \mathbf{e}_k$ for the usual basis of $\mathbb{R}^3$, using the Levi-Civita symbol, so that $$ \mathbf{a} \times \mathbf{b} = (a_i \mathbf{e}_i) \times (b_j \mathbf{e}_j) = \epsilon_{ijk} a_i b_j \mathbf{e}_k . $$ The Levi-Civita symbol in $n$ dimensions has $n$ indices, and is antisymmetric in the indices. In particular, the two-dimensional version has $$\epsilon_{12} = - \epsilon_{21} = 1 \\ \epsilon_{11} = \epsilon_{22} = 0 , $$ so we can define a bilinear antisymmetric map by $$ [\mathbf{e}_i,\mathbf{e}_j] = \epsilon_{ij} , $$ so $$ [\mathbf{a},\mathbf{b}] = [a_i \mathbf{e}_i , b_j \mathbf{e}_j] = \epsilon_{ij} a_i b_j = a_1 b_2 - a_2 b_1 , $$ although this is more like a two-dimensional version of the scalar triple product, since it is a map that gives a scalar, measuring the area of the parallelogram generated by $\mathbf{a}$ and $\mathbf{b}$.
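The 2D bracket above can be computed directly from the Levi-Civita symbol; a minimal plain-Python sketch (the names `eps` and `bracket` are illustrative, and indices are 0-based):

```python
# 2D Levi-Civita symbol, 0-indexed: eps[0][1] = 1, eps[1][0] = -1.
eps = [[0, 1], [-1, 0]]

def bracket(a, b):
    """[a, b] = eps_ij a_i b_j = a1*b2 - a2*b1 (signed parallelogram area)."""
    return sum(eps[i][j] * a[i] * b[j] for i in range(2) for j in range(2))

a = [3, 1]
b = [1, 2]
print(bracket(a, b))  # 3*2 - 1*1 = 5
print(bracket(b, a))  # antisymmetry: -5
```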

Alternatively, we could take a linear map of one vector that gives a "perpendicular", $$ \mathbf{e}_i^{\perp} = \epsilon_{ij} \mathbf{e}_j , $$ which is a more direct analogy, giving a vector that is a fixed perpendicular to the argument (in this case, a rotation by $\pi/2$). Then $$ \mathbf{a}^{\perp} = a_i \mathbf{e}_i^\perp = a_i \epsilon_{ij} \mathbf{e}_{j} = -a_2 \mathbf{e}_1 + a_1 \mathbf{e}_2 . $$
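This perpendicular map is a one-liner; a plain-Python sketch (the helper names `perp` and `dot` are illustrative) that also checks the relation $[\mathbf{a},\mathbf{b}] = \mathbf{a}^\perp \cdot \mathbf{b}$ implied by the definitions:

```python
def perp(a):
    """a_perp = eps_ij a_i e_j = (-a2, a1): rotation of a by pi/2."""
    return [-a[1], a[0]]

def dot(u, v):
    """Euclidean dot product."""
    return sum(x * y for x, y in zip(u, v))

a = [3, 1]
b = [1, 2]
print(dot(perp(a), a))  # always 0: perp(a) is perpendicular to a
print(dot(perp(a), b))  # -a2*b1 + a1*b2 = 5, the bracket [a, b]
```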

Both of these constructions also extend to $n$ dimensions, so that the perpendicular map always acts on $n-1$ vectors and gives another, $$ (\mathbf{e}_{i_1},\mathbf{e}_{i_2}, \dotsc, \mathbf{e}_{i_{n-1}} )^{\perp} = \epsilon_{i_1 i_2 \dotsm i_{n-1} i_n} \mathbf{e}_{i_n}, $$ where the Levi-Civita symbol is given by the sign of the permutation taking $1,2,\dotsc,n$ to $i_1, i_2, \dotsc, i_n $, and the scalar $n$-product/volume map, $$ [\mathbf{e}_{i_1},\mathbf{e}_{i_2}, \dotsc, \mathbf{e}_{i_n} ] = \epsilon_{i_1 i_2 \dotsm i_n} , $$ so $$ [\mathbf{a}^{(1)},\mathbf{a}^{(2)},\dotsc,\mathbf{a}^{(n)}] = \epsilon_{i_1 i_2 \dotsm i_n} a^{(1)}_{i_1} \dotsm a^{(n)}_{i_n} , $$ which you may recognise as the determinant of the matrix with entries $a^{(i)}_{j}$.
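The $n$-dimensional volume map can be implemented literally from the permutation-sign definition of the Levi-Civita symbol; a plain-Python sketch (names `perm_sign` and `volume` are illustrative, and this brute-force sum over $n!$ permutations is for demonstration, not efficiency):

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation: +1 if even, -1 if odd (counted via inversions)."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def volume(vectors):
    """[a^(1), ..., a^(n)] = sum_p sign(p) * a^(1)_{p(1)} * ... * a^(n)_{p(n)}.

    Equals the determinant of the matrix whose rows are the vectors."""
    n = len(vectors)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for row, col in enumerate(p):
            term *= vectors[row][col]
        total += term
    return total

# In 2D this reduces to the bracket a1*b2 - a2*b1:
print(volume([[3, 1], [1, 2]]))                    # 5
# In 3D it is the scalar triple product / determinant:
print(volume([[1, 0, 0], [0, 2, 0], [0, 0, 3]]))  # 6
```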