Express scalar triple product a^T(b\times c) in inner products


For $a,b,c,d\in\mathbb{R}^3$, many cross-product expressions can be written purely in terms of inner products and the vectors themselves, e.g., $$ \begin{split} a\times(b\times c) &= b\langle a, c\rangle - c\langle a, b\rangle,\\ \langle a\times b, c\times d\rangle &= \langle a, c\rangle\langle b, d\rangle - \langle a, d\rangle \langle b, c\rangle. \end{split} $$
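Both identities are easy to verify numerically, e.g. with NumPy and random vectors in $\mathbb{R}^3$ (a quick sanity check, not a proof):

```python
import numpy

# Random test vectors in R^3.
a, b, c, d = numpy.random.rand(4, 3)

# a x (b x c) = b <a,c> - c <a,b>
lhs1 = numpy.cross(a, numpy.cross(b, c))
rhs1 = b * numpy.dot(a, c) - c * numpy.dot(a, b)
assert numpy.allclose(lhs1, rhs1)

# <a x b, c x d> = <a,c><b,d> - <a,d><b,c>
lhs2 = numpy.dot(numpy.cross(a, b), numpy.cross(c, d))
rhs2 = numpy.dot(a, c) * numpy.dot(b, d) - numpy.dot(a, d) * numpy.dot(b, c)
assert numpy.isclose(lhs2, rhs2)
```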

Is there a way to express the scalar triple product $$ \langle a, b\times c\rangle $$ purely in inner products?



BEST ANSWER

(Note: Greg's answer gives the same result with a more elegant derivation.)

Turns out there is.

The expression $\langle v_3, v_1\times v_2\rangle$ can be viewed as (a scaled version of) the component of $v_3$ orthogonal to the plane spanned by $v_1$ and $v_2$. In fact, $v_3$ can be decomposed as $$ v_3 = \frac{\langle v_3, v_1\times v_2\rangle}{\langle v_1\times v_2, v_1\times v_2\rangle} (v_1\times v_2) \\ + \frac{\langle v_3, v_1\rangle}{\langle v_1, v_1\rangle} v_1 \\ + \frac{\langle v_3, \tilde{v}_2\rangle}{\langle \tilde{v}_2, \tilde{v}_2\rangle} \tilde{v}_2 $$ where $$ \tilde{v}_2 = v_2 - \frac{\langle v_2, v_1\rangle}{\langle v_1, v_1\rangle} v_1 $$ is the part of $v_2$ orthogonal to $v_1$.
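This decomposition is easy to sanity-check numerically; a minimal sketch with NumPy and random vectors, where `v2t` stands in for $\tilde{v}_2$:

```python
import numpy

# Three random vectors in R^3 (almost surely linearly independent).
v1, v2, v3 = numpy.random.rand(3, 3)

n = numpy.cross(v1, v2)
# Part of v2 orthogonal to v1.
v2t = v2 - numpy.dot(v2, v1) / numpy.dot(v1, v1) * v1

# Reassemble v3 from its components along the orthogonal basis n, v1, v2t.
reconstructed = (
    numpy.dot(v3, n) / numpy.dot(n, n) * n
    + numpy.dot(v3, v1) / numpy.dot(v1, v1) * v1
    + numpy.dot(v3, v2t) / numpy.dot(v2t, v2t) * v2t
)
assert numpy.allclose(v3, reconstructed)
```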

Since the vectors $v_1\times v_2, v_1, \tilde{v}_2$ are pairwise orthogonal, we have $$ \langle v_3, v_3\rangle = \frac{\langle v_3, v_1\times v_2\rangle^2}{\langle v_1\times v_2, v_1\times v_2\rangle} + \frac{\langle v_3, v_1\rangle^2}{\langle v_1, v_1\rangle} + \frac{\langle v_3, \tilde{v}_2\rangle^2}{\langle \tilde{v}_2, \tilde{v}_2\rangle}. $$ Now it's just a matter of rearranging terms to isolate $\langle v_3, v_1\times v_2\rangle^2$. Note specifically that (by the Lagrange identity) $$ \langle v_1\times v_2, v_1\times v_2\rangle = \langle v_1, v_1\rangle \langle v_2, v_2\rangle - \langle v_1, v_2\rangle^2 $$ and $$ \langle \tilde{v}_2, \tilde{v}_2\rangle = \frac{\langle v_1, v_1\rangle \langle v_2, v_2\rangle - \langle v_1, v_2\rangle^2}{\langle v_1, v_1\rangle}. $$
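Both auxiliary identities can also be verified symbolically; a minimal sketch with SymPy (assuming it is available), with `v2t` standing in for $\tilde{v}_2$:

```python
import sympy

# Symbolic 3-vectors v1 = (a1, a2, a3), v2 = (b1, b2, b3).
a1, a2, a3, b1, b2, b3 = sympy.symbols("a1:4 b1:4")
v1 = sympy.Matrix([a1, a2, a3])
v2 = sympy.Matrix([b1, b2, b3])

n = v1.cross(v2)
gram = v1.dot(v1) * v2.dot(v2) - v1.dot(v2) ** 2

# |v1 x v2|^2 = <v1,v1><v2,v2> - <v1,v2>^2 (Lagrange identity)
assert sympy.expand(n.dot(n) - gram) == 0

# |v2t|^2 = gram / <v1,v1>
v2t = v2 - v1.dot(v2) / v1.dot(v1) * v1
assert sympy.cancel(v2t.dot(v2t) - gram / v1.dot(v1)) == 0
```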

(An interesting intermediate step is $$ \langle v_3, v_3\rangle = \frac{\langle v_3, v_1\times v_2\rangle^2}{\langle v_1\times v_2, v_1\times v_2\rangle} + \frac{\langle v_1, v_1\rangle \langle v_2, v_3\rangle^2 + \langle v_2, v_2\rangle \langle v_3, v_1\rangle^2 - 2\langle v_1, v_2\rangle\langle v_2, v_3\rangle\langle v_3, v_1\rangle}{\langle v_1\times v_2, v_1\times v_2\rangle} $$ which splits $v_3$ into components orthogonal and parallel to the plane spanned by $v_1$ and $v_2$.)

Finally we arrive at the nicely symmetric $$ \langle v_3, v_1\times v_2\rangle^2 =\\ \langle v_1, v_1\rangle \langle v_2, v_2\rangle \langle v_3, v_3\rangle + 2 \langle v_1, v_2\rangle \langle v_2, v_3\rangle \langle v_3, v_1\rangle\\ - \langle v_1, v_1\rangle \langle v_2, v_3\rangle^2 - \langle v_2, v_2\rangle \langle v_3, v_1\rangle^2 - \langle v_3, v_3\rangle \langle v_1, v_2\rangle^2. $$ Note that this doesn't say anything about the sign of $\langle v_3, v_1\times v_2\rangle$.


Here is a bit of Python code that supports the claim (the two printed values agree up to floating-point round-off):

import numpy

v1 = numpy.random.rand(3)
v2 = numpy.random.rand(3)
v3 = numpy.random.rand(3)

val = numpy.dot(v3, numpy.cross(v1, v2))**2
print(val)

v1_dot_v2 = numpy.dot(v1, v2)
v2_dot_v3 = numpy.dot(v2, v3)
v3_dot_v1 = numpy.dot(v3, v1)

v1_dot_v1 = numpy.dot(v1, v1)
v2_dot_v2 = numpy.dot(v2, v2)
v3_dot_v3 = numpy.dot(v3, v3)

val2 = (
    v3_dot_v3 * v1_dot_v1 * v2_dot_v2
    + 2 * v1_dot_v2 * v2_dot_v3 * v3_dot_v1
    - v3_dot_v3 * v1_dot_v2**2
    - v3_dot_v1**2 * v2_dot_v2
    - v2_dot_v3**2 * v1_dot_v1
    )

print(val2)

The cross product is notoriously slow, so in code it is almost always beneficial to replace it with dot products. In this case, though, the speed-up shows only for smaller $n$ (the number of vector triples). Note also that the six separate dot products can be replaced by one big operation computing all 3x3 (partly redundant) dot products; this variant turns out to be faster for small $n$.

[perfplot timing comparison of dot_cross, dot6, and dot3x3]

import numpy
import perfplot


def setup(n):
    return numpy.random.rand(3, 3, n)


def dot_cross(data):
    v1, v2, v3 = data
    c = numpy.cross(v1.T, v2.T).T
    val = numpy.einsum("ij,ij->j", v3, c) ** 2
    return val


def dot6(data):
    v1, v2, v3 = data

    v1_dot_v2 = numpy.einsum("ij,ij->j", v1, v2)
    v2_dot_v3 = numpy.einsum("ij,ij->j", v2, v3)
    v3_dot_v1 = numpy.einsum("ij,ij->j", v3, v1)

    v1_dot_v1 = numpy.einsum("ij,ij->j", v1, v1)
    v2_dot_v2 = numpy.einsum("ij,ij->j", v2, v2)
    v3_dot_v3 = numpy.einsum("ij,ij->j", v3, v3)

    val = (
        v3_dot_v3 * v1_dot_v1 * v2_dot_v2
        + 2 * v1_dot_v2 * v2_dot_v3 * v3_dot_v1
        - v3_dot_v3 * v1_dot_v2 ** 2
        - v3_dot_v1 ** 2 * v2_dot_v2
        - v2_dot_v3 ** 2 * v1_dot_v1
    )
    return val


def dot3x3(v):
    M = numpy.einsum("ij...,kj...->ik...", v, v)
    val = (
        M[0, 0] * M[1, 1] * M[2, 2]
        + 2 * M[0, 1] * M[1, 2] * M[2, 0]
        - M[0, 1] ** 2 * M[2, 2]
        - M[2, 0] ** 2 * M[1, 1]
        - M[1, 2] ** 2 * M[0, 0]
    )
    return val


perfplot.save(
    "out.png",
    setup=setup,
    n_range=[2 ** k for k in range(20)],
    kernels=[dot_cross, dot6, dot3x3],
)
SECOND ANSWER (by Greg)

The scalar triple product of the three vectors $a,b,c$ can be thought of as the determinant of the matrix with these vectors as columns, i.e. $$a\cdot(b\times c) = \det([a\;b\;c])$$ We also know that the determinant of a matrix is equal to the determinant of its transpose $$\det(M)=\det(M^T)$$ and that the product of two determinants is equal to the determinant of the product of their matrices $$\det(M)\det(N)=\det(MN)$$ Pulling all of these observations together yields $$\eqalign{ \Big(\det([a\;b\;c])\Big)^2 &= \det([a\;b\;c])\;\det([a\;b\;c]) \\ &= \det([a\;b\;c]^T)\;\det([a\;b\;c]) \\ &= \det\Big([a\;b\;c]^T[a\;b\;c]\Big) \\ &= \det\pmatrix{a^Ta&a^Tb&a^Tc\\b^Ta&b^Tb&b^Tc\\c^Ta&c^Tb&c^Tc} \\ }$$ The matrix in the final determinant (the Gram matrix) is made up entirely of dot products of the three vectors. And, of course, its determinant can be expanded into a polynomial in these scalar products via the Laplace expansion.
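This determinant identity is again easy to sanity-check numerically; a minimal sketch with NumPy and random vectors:

```python
import numpy

# Random vectors in R^3, assembled as the columns of M = [a b c].
a, b, c = numpy.random.rand(3, 3)
M = numpy.column_stack([a, b, c])

# det([a b c]) equals the scalar triple product a . (b x c).
assert numpy.isclose(numpy.linalg.det(M), numpy.dot(a, numpy.cross(b, c)))

# det(M)^2 equals the determinant of the Gram matrix M^T M of dot products.
gram = M.T @ M
assert numpy.isclose(numpy.linalg.det(M) ** 2, numpy.linalg.det(gram))
```

Expanding the $3\times 3$ Gram determinant by hand reproduces the symmetric expression for $\langle v_3, v_1\times v_2\rangle^2$ given in the other answer.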