derivative formula $\nabla \times (\mathbf{a} \times \mathbf{r}) = \nabla \cdot(\mathbf{a} \wedge\mathbf{r}) = (n-1)\mathbf{a}$


Assume $\mathbf{r}=\mathbf{x}-\mathbf{x}'$ is the position vector in $\mathbb{R}^n$. For a constant vector $\mathbf{a}$, we have

$$\nabla \times (\mathbf{a} \times \mathbf{r}) = \nabla \cdot(\mathbf{a} \wedge\mathbf{r}) = (n-1)\mathbf{a}.$$

This comes from Fig. 6 in Tutorial on Geometric Calculus by David Hestenes. Can anyone help me derive it? My attempt: $\nabla \cdot(\mathbf{a} \wedge\mathbf{r}) = \epsilon^{ijk}\partial_i a_j x_k = 0$.


BEST ANSWER

The left-hand side comes from the Hodge dual of subspaces in three dimensions. That is, $$a\times b=(a\wedge b)^*=(a\wedge b)I^{-1},$$ where $I=e_1e_2e_3$ is the unit pseudoscalar. Hence $$\nabla\times(a\times r)=\nabla\wedge\big((a\wedge r)I^{-1}\big)I^{-1}=\nabla\cdot(a\wedge r)\,I^{-1}I^{-1}=-\nabla\cdot(a\wedge r),$$ since $I^{-2}=-1$ in three dimensions. For the right-hand side: in Hestenes' geometric algebra, $\nabla=e_i\partial^i$ is regarded as a vector, so, using $x\cdot(u\wedge v)=(x\cdot u)v-(x\cdot v)u$, $$\begin{align}\nabla\cdot(a\wedge r)&=e_i\cdot\partial^i(a\wedge r)\\&=e_i\cdot(a\wedge\partial^ir)\\&=e_i\cdot(a\wedge e^i)\\&=(e_i\cdot a)e^i-(e_i\cdot e^i)a\\&=a-na\\&=(1-n)a.\end{align}$$
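As a sanity check of the $(1-n)a$ result: from $x\cdot(u\wedge v)=(x\cdot u)v-(x\cdot v)u$, the components are $[\nabla\cdot(a\wedge r)]_j=\partial_i(a_ir_j-a_jr_i)$, which a CAS can sum directly. A minimal sketch with sympy (the dimension $n=4$ and the symbol names are my own choices):

```python
import sympy as sp

# Check that [∇·(a∧r)]_j = ∂_i (a_i r_j - a_j r_i) sums to (1-n) a_j.
n = 4
x = sp.symbols(f'x0:{n}')   # components of r (the constant x' drops out on differentiation)
a = sp.symbols(f'a0:{n}')   # components of the constant vector a
# bivector components B_ij of a ∧ r
B = [[a[i] * x[j] - a[j] * x[i] for j in range(n)] for i in range(n)]
# contract with the gradient: (∇·B)_j = ∂_i B_ij
div = [sum(sp.diff(B[i][j], x[i]) for i in range(n)) for j in range(n)]
assert div == [(1 - n) * a[j] for j in range(n)]  # (1-n)a, not (n-1)a
```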

For comparison, $\nabla\cdot(a\times r)$ in vector algebra is $$\begin{align}\nabla\cdot(a\times r)=&\epsilon^{ijk}\partial_i(a_jr_k)=\epsilon^{ijk}a_j\partial_ir_k=\epsilon^{ijk}a_j\delta_{ik}=0,\end{align}$$ or, in geometric algebra, $$\nabla\cdot(a\times r)=\nabla\cdot\big((a\wedge r)I^{-1}\big)=\nabla\wedge(a\wedge r)\,I^{-1}=a\wedge(\nabla\wedge r)\,I=0,$$ where the sign from moving $\nabla$ past $a$ in the wedge is absorbed by $I^{-1}=-I$, and $\nabla\wedge r=0$.
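The $\nabla\cdot(a\times r)=0$ claim is easy to confirm symbolically in three dimensions; a small sketch with sympy (the symbol names are illustrative):

```python
import sympy as sp

# Symbolic check that div(a × r) = 0 in three dimensions, with a constant.
x0, x1, x2 = sp.symbols('x0 x1 x2')
a = sp.Matrix(sp.symbols('a0 a1 a2'))
r = sp.Matrix([x0, x1, x2])
F = a.cross(r)                                          # a × r
div = sum(sp.diff(F[i], v) for i, v in enumerate((x0, x1, x2)))
assert div == 0
```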


Also consider a proof without breaking into a basis:

$$\langle \nabla ar \rangle_1 = (a \cdot \nabla) r - (a \wedge \nabla) \cdot r = \nabla \cdot (a \wedge r) + \nabla (a \cdot r)$$

You might already know that $a \cdot \nabla r = a$. You might also know the BAC-CAB rule (which can be proved by cyclic permutation, without breaking into a basis); applied to the second term, it gives $-a (\nabla \cdot r) + \nabla(a \cdot r)$. The first of those is just $-na$, and the gradients cancel. The result you get is $(1-n) a$, not $(n-1)a$.
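The BAC-CAB rule used above, $a\times(b\times c)=b(a\cdot c)-c(a\cdot b)$, can also be spot-checked numerically; a quick sketch with numpy and random vectors:

```python
import numpy as np

# Numerical spot-check of BAC-CAB: a × (b × c) = b (a·c) - c (a·b).
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random vectors in R^3
lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
assert np.allclose(lhs, rhs)
```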


I would think the traditional vector-analysis approach still applies. With $\mathbf{r}\in\mathbb{R}^n$, for the $i$-th component, \begin{align} [\nabla\times(\mathbf{a}\times\mathbf{r})]_i&=\varepsilon_{ijk}\partial_j[\mathbf{a}\times\mathbf{r}]_k=\varepsilon_{ijk}\partial_j(\varepsilon_{k\ell m}a_{\ell}r_m)\\ &=\varepsilon_{ijk}\varepsilon_{k\ell m}(r_m\partial_ja_{\ell}+a_{\ell}\partial_jr_m)\\ &=(\delta_{i\ell}\delta_{jm}-\delta_{im}\delta_{j\ell})(r_m\partial_ja_{\ell}+a_{\ell}\partial_jr_m)\\ &=r_j\partial_ja_i+a_i\partial_jr_j-r_i\partial_ja_j-a_j\partial_jr_i, \end{align} i.e., \begin{align} \nabla\times(\mathbf{a}\times\mathbf{r})&=(\mathbf{r}\cdot\nabla)\mathbf{a}+\mathbf{a}(\nabla\cdot\mathbf{r})-\mathbf{r}(\nabla\cdot\mathbf{a})-(\mathbf{a}\cdot\nabla)\mathbf{r}\\ &=\mathbf{a}(\nabla\cdot\mathbf{r})-(\mathbf{a}\cdot\nabla)\mathbf{r}\qquad(\mathbf{a}\text{ constant})\\ &=n\mathbf{a}-\mathbf{a}\\ &=(n-1)\mathbf{a}, \end{align} using $\nabla\cdot\mathbf{r}=n$ and $(\mathbf{a}\cdot\nabla)\mathbf{r}=\mathbf{a}$.
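For $n=3$ this predicts $\nabla\times(\mathbf{a}\times\mathbf{r})=2\mathbf{a}$, which can be confirmed symbolically; a short sketch with sympy (the symbol names are my own):

```python
import sympy as sp

# Symbolic check of curl(a × r) = 2a for n = 3, i.e. (n-1)a with n = 3.
xs = sp.symbols('x0 x1 x2')
a = sp.Matrix(sp.symbols('a0 a1 a2'))   # constant vector a
F = a.cross(sp.Matrix(xs))              # a × r, with r = x (constant x' drops out)
# curl via the standard component formula
curl = sp.Matrix([
    sp.diff(F[2], xs[1]) - sp.diff(F[1], xs[2]),
    sp.diff(F[0], xs[2]) - sp.diff(F[2], xs[0]),
    sp.diff(F[1], xs[0]) - sp.diff(F[0], xs[1]),
])
assert curl == 2 * a
```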