How to deduce this formula using differential forms?


There's a formula from vector calculus that seems terrible to deduce. This formula is:

$$\nabla\times (A\times B)=(B\cdot\nabla )A-(A\cdot \nabla)B+A (\nabla\cdot B)-B(\nabla\cdot A)$$

Deducing it by explicitly computing the left-hand side until it matches the right-hand side is feasible, but it is complicated, verbose, and coordinate-dependent. Is it possible to get this from the calculus of differential forms?
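Before attempting a derivation, the identity itself can be spot-checked numerically. The following is a quick sanity check with central differences on arbitrary smooth test fields (the field choices and helper names are mine, and this is a check, not a proof):

```python
import math

# Two arbitrary smooth test fields (any choice works for a spot check).
def A(p):
    x, y, z = p
    return [math.sin(y) * z, x * x + z, math.exp(0.1 * x) * y]

def B(p):
    x, y, z = p
    return [x * y, math.cos(z), x + y * y]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

H = 1e-5
def partial(F, i, k, p):
    # Central difference for dF_k/dx_i at the point p.
    q1 = [c + (H if n == i else 0.0) for n, c in enumerate(p)]
    q2 = [c - (H if n == i else 0.0) for n, c in enumerate(p)]
    return (F(q1)[k] - F(q2)[k]) / (2 * H)

def curl(F, p):
    return [partial(F, 1, 2, p) - partial(F, 2, 1, p),
            partial(F, 2, 0, p) - partial(F, 0, 2, p),
            partial(F, 0, 1, p) - partial(F, 1, 0, p)]

def div(F, p):
    return sum(partial(F, i, i, p) for i in range(3))

def dirderiv(U, F, p):
    # (U . nabla) F at p, componentwise.
    u = U(p)
    return [sum(u[i] * partial(F, i, k, p) for i in range(3)) for k in range(3)]

p = [0.3, -0.7, 1.2]
lhs = curl(lambda q: cross(A(q), B(q)), p)
rhs = [dirderiv(B, A, p)[k] - dirderiv(A, B, p)[k]
       + A(p)[k] * div(B, p) - B(p)[k] * div(A, p) for k in range(3)]
assert all(abs(l - r) < 1e-6 for l, r in zip(lhs, rhs))
```

Both sides agree to within the discretization error, which is at least reassuring before investing in a coordinate-free derivation.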

I mean, taking the curl of a vector field corresponds to taking the exterior derivative of a one-form, and taking the cross product of vector fields corresponds to taking the wedge product of one-forms.

So we would be computing something like $d(\omega\wedge \eta)$, which we know how to compute. The problem is that $\omega \wedge \eta$ would need to be a $1$-form for all of this to work out, whereas the wedge product of two $1$-forms is a $2$-form.

I really don't know how to do it. Is it feasible? If so, how can we deduce this?


There are 3 answers below.

Answer 1

There's an easy way to prove it using Cartesian tensor notation. See equation (2.10) in these solutions: http://phys.columbia.edu/~cheung/courses/MMSP2014/PS/s14_sol01.pdf

I hope this is what you were looking for.

Answer 2

This can be answered easily with geometric calculus.

The cross product is a duality operation:

$$A \times B = -\epsilon (A \wedge B)$$

The same goes for the curl:

$$\nabla \times (A \times B) = -\epsilon \nabla \wedge [-\epsilon(A \wedge B)]$$

The 3-vector $\epsilon$ can be pulled out of the wedge product, at the cost of turning it into a contraction instead.

$$\nabla \times (A \times B) = \epsilon^2\, \nabla \cdot [A \wedge B] = -\nabla \cdot [A \wedge B],$$ since $\epsilon^2 = -1$ in three Euclidean dimensions.

Notationally this uses a dot, but the result on the right-hand side is a vector. The dot is a contraction: $\nabla$ is a 1-vector and $A \wedge B$ is a 2-vector, so the contraction has grade $2-1=1$; that is, it is an ordinary vector.

From here, you can just use an analogue of the BAC-CAB rule:

$$X \cdot [Y \wedge Z ] = (X \cdot Y) Z - (X \cdot Z) Y$$
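Under the duality $u \times v = -\epsilon(u \wedge v)$ used above, this contraction rule is equivalent to the classical BAC-CAB identity, in the form $x \cdot (y \wedge z) = -x \times (y \times z)$. That version is easy to spot-check with concrete vectors (a small sketch; the values are arbitrary):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

x, y, z = [2.0, -1.0, 3.0], [0.5, 4.0, -2.0], [1.0, 1.0, 5.0]

# x . (y ^ z) = -(x × (y × z)) under the duality above.
lhs = [-c for c in cross(x, cross(y, z))]
rhs = [dot(x, y) * zi - dot(x, z) * yi for zi, yi in zip(z, y)]
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```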

Applying this to the derivative and imposing the product rule gives

$$\nabla \cdot [A \wedge B ] = (\dot \nabla \cdot A) \dot B + (\nabla \cdot A) B - (\dot \nabla \cdot B)\dot A - (\nabla \cdot B) A$$

What's up with the dots, you ask? Well, $\nabla$ has to differentiate both vector fields, but the BAC-CAB rule only produced two terms, so each one must be split. In $(\dot \nabla \cdot A) \dot B$, the dots denote that $\nabla$ differentiates $B$ only, despite sitting next to $A$. This keeps the term from being mistaken for a divergence multiplying a vector.

Now, notice that $(\dot \nabla \cdot A) \dot B = (A \cdot \nabla) B$ and $(\dot \nabla \cdot B)\dot A = (B \cdot \nabla) A$. Reinstating the overall minus sign from the beginning gives the stated result.
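The overdot bookkeeping can be double-checked in coordinates: writing the 2-vector $A \wedge B$ as the antisymmetric array $M_{ij} = a_i b_j - a_j b_i$ and contracting as $[\nabla \cdot M]_j = \partial_i M_{ij}$ reproduces all four terms of the formula above. A numerical sketch (the test fields and the sign convention for the contraction are my own choices, made to match this answer):

```python
import math

# Arbitrary smooth test fields for a spot check.
def a(p):
    x, y, z = p
    return [x * y * z, math.sin(x) + z, y * y]

def b(p):
    x, y, z = p
    return [z * z, x + math.cos(y), x * y]

H = 1e-5
def partial(f, i, p):
    # Central difference for the scalar function f along axis i.
    q1 = [c + (H if n == i else 0.0) for n, c in enumerate(p)]
    q2 = [c - (H if n == i else 0.0) for n, c in enumerate(p)]
    return (f(q1) - f(q2)) / (2 * H)

p = [0.4, 1.1, -0.6]

# Components of the 2-vector A ^ B: M_ij = a_i b_j - a_j b_i,
# contracted as (nabla . M)_j = sum_i d_i M_ij.
def M(i, j):
    return lambda q: a(q)[i] * b(q)[j] - a(q)[j] * b(q)[i]

contraction = [sum(partial(M(i, j), i, p) for i in range(3)) for j in range(3)]

divA = sum(partial(lambda q, i=i: a(q)[i], i, p) for i in range(3))
divB = sum(partial(lambda q, i=i: b(q)[i], i, p) for i in range(3))
AgradB = [sum(a(p)[i] * partial(lambda q, j=j: b(q)[j], i, p)
              for i in range(3)) for j in range(3)]
BgradA = [sum(b(p)[i] * partial(lambda q, j=j: a(q)[j], i, p)
              for i in range(3)) for j in range(3)]

# (A.grad)B + (div A)B - (B.grad)A - (div B)A, componentwise.
rhs = [AgradB[j] + divA * b(p)[j] - BgradA[j] - divB * a(p)[j] for j in range(3)]
assert all(abs(c - r) < 1e-6 for c, r in zip(contraction, rhs))
```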

Answer 3

Daniel Fischer is right. To perform this calculation with differential forms, you need to use the "Hodge dual", or "star operator". In three dimensions, it transforms a 2-form $\omega = X\,dy\wedge dz + Y\,dz\wedge dx + Z\,dx\wedge dy$ into the 1-form $*\omega = X\,dx + Y\,dy + Z\,dz$ (and vice versa).

In general: $*(dx\wedge dy)= dz$, and so on, cyclically. Using the Levi-Civita symbol notation, and the summation convention on $k$: $$ *(dx^i\wedge dx^j) = \epsilon_{ijk}\,dx^k. $$

If the indices seem misplaced, it is because this is an operation involving the (Euclidean) metric. Written as the "curl of a cross product" this is not explicit, but it is still true. When an operation involves the metric, using differential forms is still convenient, but not as much as in other cases.

Let $A = a_i\,dx^i$ and $B=b_j\,dx^j$ (summation convention). Consider the 2-form: $$ C = d*(A\wedge B). $$

Now, since you are looking for a vector, or a 1-form in our case, you should again take the dual, $*C$.

Writing out: $$ *C = *d*(a_i\,dx^i\wedge b_j\,dx^j) = *d*(a_i\,b_j\,dx^i\wedge dx^j). $$

Using the formula above: $$ *C = *d(a_i\,b_j*(dx^i\wedge dx^j)) = \epsilon_{ijk}*d(a_i\,b_j\,dx^k). $$

Now, differentiating: $$ *C= \epsilon_{ijk}\,*\!\left[d(a_i\,b_j)\wedge dx^k\right] = \epsilon_{ijk}\frac{\partial(a_i\,b_j)}{\partial x^l}\,*\!\left(dx^l\wedge dx^k\right). $$

Applying the dual once again: $$ *C = \epsilon_{ijk}\epsilon_{lkm} \frac{\partial(a_i\,b_j)}{\partial x^l}\,dx^m = -\epsilon_{ijk}\epsilon_{klm} \frac{\partial(a_i\,b_j)}{\partial x^l}\,dx^m. $$

By symmetry considerations (or by very tedious calculations), this identity holds: $$ \epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}, $$
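Since each free index only takes three values, this identity can also be verified exhaustively by a short script (the product formula used here for the Levi-Civita symbol is a standard one):

```python
import itertools

def eps(i, j, k):
    # Levi-Civita symbol via the product formula (values in {-1, 0, 1}).
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    return 1 if i == j else 0

# Check eps_{ijk} eps_{klm} = delta_{il} delta_{jm} - delta_{im} delta_{jl}
# for all 81 combinations of the free indices, summing over k.
for i, j, l, m in itertools.product(range(3), repeat=4):
    lhs = sum(eps(i, j, k) * eps(k, l, m) for k in range(3))
    rhs = delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)
    assert lhs == rhs
```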

where the Kronecker delta $\delta_{ij}$ is simply the component form of the identity matrix. So: $$ *C = (\delta_{im}\delta_{jl} - \delta_{il}\delta_{jm}) \frac{\partial(a_i\,b_j)}{\partial x^l}\,dx^m. $$

Writing this out explicitly: $$ *C = \frac{\partial(a_i\,b_j)}{\partial x^j}\,dx^i - \frac{\partial(a_i\,b_j)}{\partial x^i}\,dx^j. $$

By the product rule, the first term is: $$ \frac{\partial a_i}{\partial x^j}\,b_j\,dx^i + a_i\,\frac{\partial b_j}{\partial x^j}\,dx^i, $$

which is the 1-form of components (summing on $j$): $$ \omega_i = (b_j\,\partial_j)\,a_i + a_i\,(\partial_j b_j), $$

corresponding to the vector (look at the indices): $$ (B\cdot\nabla)A + A\,(\nabla\cdot B). $$

The second term is: $$ - \frac{\partial a_i}{\partial x^i}\,b_j\,dx^j - a_i\,\frac{\partial b_j}{\partial x^i}\,dx^j, $$

which is the 1-form of components (summing on $i$): $$ \phi_j = -(\partial_i a_i)\,b_j - (a_i\,\partial_i)\,b_j, $$

corresponding to the vector: $$ -(\nabla\cdot A)\,B - (A\cdot\nabla)\,B. $$

Summing up the two terms, you get exactly the vector you wanted.
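As a final check, the two-term expression for $*C$ can be compared componentwise against the claimed vector numerically (a sketch with arbitrary test fields; the field choices and helper names are mine):

```python
import math

# Arbitrary smooth test fields.
def a(p):
    x, y, z = p
    return [x * y, math.sin(z), x + y * z]

def b(p):
    x, y, z = p
    return [y * y, x * z, math.cos(x)]

H = 1e-5
def partial(f, i, p):
    # Central difference for the scalar function f along axis i.
    q1 = [c + (H if n == i else 0.0) for n, c in enumerate(p)]
    q2 = [c - (H if n == i else 0.0) for n, c in enumerate(p)]
    return (f(q1) - f(q2)) / (2 * H)

p = [0.9, -0.2, 0.5]

# Coefficient of dx^m in *C = d_j(a_i b_j) dx^i - d_i(a_i b_j) dx^j.
starC = [sum(partial(lambda q, m=m, j=j: a(q)[m] * b(q)[j], j, p)
             for j in range(3))
         - sum(partial(lambda q, i=i, m=m: a(q)[i] * b(q)[m], i, p)
               for i in range(3))
         for m in range(3)]

divA = sum(partial(lambda q, i=i: a(q)[i], i, p) for i in range(3))
divB = sum(partial(lambda q, i=i: b(q)[i], i, p) for i in range(3))

# (B.grad)A - (A.grad)B + A(div B) - B(div A), componentwise.
target = [sum(b(p)[j] * partial(lambda q, m=m: a(q)[m], j, p) for j in range(3))
          - sum(a(p)[j] * partial(lambda q, m=m: b(q)[m], j, p) for j in range(3))
          + a(p)[m] * divB - b(p)[m] * divA
          for m in range(3)]
assert all(abs(s - t) < 1e-6 for s, t in zip(starC, target))
```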