For $a,b,c,d\in\mathbb{R}^3$, many cross-product expressions can be written purely in inner and vector products, e.g., $$ \begin{split} a\times(b\times c) &= b\langle a, c\rangle - c\langle a, b\rangle,\\ \langle a\times b, c\times d\rangle &= \langle a, c\rangle\langle b, d\rangle - \langle a, d\rangle \langle b, c\rangle. \end{split} $$
Is there a way to express the scalar triple product $$ \langle a, b\times c\rangle $$ purely in inner products?
(Note: Greg's answer gives the same result with a more elegant derivation.)
Turns out there is.
The expression $\langle v_3, v_1\times v_2\rangle$ measures the component of $v_3$ orthogonal to the plane spanned by $v_1$ and $v_2$ (scaled by the length of $v_1\times v_2$). In fact, $v_3$ can be decomposed into $$ v_3 = \frac{\langle v_3, v_1\times v_2\rangle}{\langle v_1\times v_2, v_1\times v_2\rangle} (v_1\times v_2) \\ + \frac{\langle v_3, v_1\rangle}{\langle v_1, v_1\rangle} v_1 \\ + \frac{\langle v_3, \tilde{v}_2\rangle}{\langle \tilde{v}_2, \tilde{v}_2\rangle} \tilde{v}_2 $$ where $$ \tilde{v}_2 = v_2 - \frac{\langle v_2, v_1\rangle}{\langle v_1, v_1\rangle} v_1 $$ is the part of $v_2$ orthogonal to $v_1$.
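This decomposition is easy to check numerically. A minimal NumPy sketch on random test vectors (variable names are my own, chosen to mirror the formula):

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2, v3 = rng.standard_normal((3, 3))

# Orthogonal decomposition of v3 into the v1 x v2 direction,
# the v1 direction, and the part of v2 orthogonal to v1.
cross = np.cross(v1, v2)
v2t = v2 - (v2 @ v1) / (v1 @ v1) * v1  # \tilde{v}_2

reconstructed = (
    (v3 @ cross) / (cross @ cross) * cross
    + (v3 @ v1) / (v1 @ v1) * v1
    + (v3 @ v2t) / (v2t @ v2t) * v2t
)

assert np.allclose(reconstructed, v3)
```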
Since $v_1\times v_2$, $v_1$, and $\tilde{v}_2$ are pairwise orthogonal, we have $$ \langle v_3, v_3\rangle = \frac{\langle v_3, v_1\times v_2\rangle^2}{\langle v_1\times v_2, v_1\times v_2\rangle} + \frac{\langle v_3, v_1\rangle^2}{\langle v_1, v_1\rangle} + \frac{\langle v_3, \tilde{v}_2\rangle^2}{\langle \tilde{v}_2, \tilde{v}_2\rangle}. $$ Now it's just a matter of moving terms around to isolate $\langle v_3, v_1\times v_2\rangle^2$. Note specifically that $$ \langle v_1\times v_2, v_1\times v_2\rangle = \langle v_1, v_1\rangle \langle v_2, v_2\rangle - \langle v_1, v_2\rangle^2 $$ and $$ \langle \tilde{v}_2, \tilde{v}_2\rangle = \frac{\langle v_1, v_1\rangle \langle v_2, v_2\rangle - \langle v_1, v_2\rangle^2}{\langle v_1, v_1\rangle}. $$
(An interesting intermediate step is $$ \langle v_3, v_3\rangle = \frac{\langle v_3, v_1\times v_2\rangle^2}{\langle v_1\times v_2, v_1\times v_2\rangle} + \frac{\langle v_1, v_1\rangle \langle v_2, v_3\rangle^2 + \langle v_2, v_2\rangle \langle v_3, v_1\rangle^2 - 2\langle v_1, v_2\rangle\langle v_2, v_3\rangle\langle v_3, v_1\rangle}{\langle v_1\times v_2, v_1\times v_2\rangle} $$ which splits $v_3$ into components orthogonal and parallel to the plane spanned by $v_1$ and $v_2$.)
Finally we arrive at the nicely symmetric $$ \langle v_3, v_1\times v_2\rangle^2 =\\ \langle v_1, v_1\rangle \langle v_2, v_2\rangle \langle v_3, v_3\rangle + 2 \langle v_1, v_2\rangle \langle v_2, v_3\rangle \langle v_3, v_1\rangle\\ - \langle v_1, v_1\rangle \langle v_2, v_3\rangle^2 - \langle v_2, v_2\rangle \langle v_3, v_1\rangle^2 - \langle v_3, v_3\rangle \langle v_1, v_2\rangle^2. $$ Note that this doesn't say anything about the sign of $\langle v_3, v_1\times v_2\rangle$.
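As an aside (a standard fact, stated here for reference): the right-hand side is exactly the determinant of the Gram matrix of $v_1, v_2, v_3$, $$ \langle v_3, v_1\times v_2\rangle^2 = \det \begin{pmatrix} \langle v_1, v_1\rangle & \langle v_1, v_2\rangle & \langle v_1, v_3\rangle \\ \langle v_2, v_1\rangle & \langle v_2, v_2\rangle & \langle v_2, v_3\rangle \\ \langle v_3, v_1\rangle & \langle v_3, v_2\rangle & \langle v_3, v_3\rangle \end{pmatrix}, $$ which also follows from $\det(V^\top V) = (\det V)^2$ for the matrix $V$ with columns $v_1, v_2, v_3$.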
Here is a bit of Python code that supports the claim:
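A minimal sketch of such a check, comparing both sides of the squared identity on random vectors with NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
for _ in range(1000):
    v1, v2, v3 = rng.standard_normal((3, 3))
    # Left-hand side: squared scalar triple product via the cross product.
    lhs = np.dot(v3, np.cross(v1, v2)) ** 2
    # Right-hand side: the same quantity from inner products only.
    rhs = (
        (v1 @ v1) * (v2 @ v2) * (v3 @ v3)
        + 2 * (v1 @ v2) * (v2 @ v3) * (v3 @ v1)
        - (v1 @ v1) * (v2 @ v3) ** 2
        - (v2 @ v2) * (v3 @ v1) ** 2
        - (v3 @ v3) * (v1 @ v2) ** 2
    )
    assert np.isclose(lhs, rhs)
```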
The cross product is notoriously slow, so in code it is almost always beneficial to replace it with dot products. In this case, though, you see a speed-up only for smaller vector sizes. Note also that the six separate dot products can be replaced by one big operation computing 3x3 (partly redundant) dot products. This variant turns out to be faster for small $n$.
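The 3x3 variant can be sketched as follows; the function name `triple_product_squared` is mine, not from any particular library:

```python
import numpy as np

def triple_product_squared(v1, v2, v3):
    # Stack the vectors and compute all pairwise dot products at once:
    # G[i, j] = <v_i, v_j>, the (partly redundant) Gram matrix,
    # obtained in a single matrix multiplication.
    V = np.stack([v1, v2, v3])
    G = V @ V.T
    return (
        G[0, 0] * G[1, 1] * G[2, 2]
        + 2 * G[0, 1] * G[1, 2] * G[2, 0]
        - G[0, 0] * G[1, 2] ** 2
        - G[1, 1] * G[2, 0] ** 2
        - G[2, 2] * G[0, 1] ** 2
    )
```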