Understanding multiplication of differentials


I'm currently reading Barrett O'Neill's book on differential geometry. In the book, he defines the differential $df$ of $f$ as a 1-form, where a 1-form is a map $\phi:T(\mathbb R^3)\rightarrow\mathbb R$. I don't quite understand why there is an alternation rule for differentials when we multiply them: $$ dx_i\,dx_j=-dx_j\,dx_i$$

If we just pointwise multiply the real-valued differentials, where does the negative sign come from? I hope somebody can help me sort this out.

P.S. The relevant definitions from my book are in the attached images.

On BEST ANSWER

It has to do with the fact that you are going to use 2-forms to compute surface areas. The simplest surface area to compute is that of a parallelogram: the area of the parallelogram spanned by $v_1, v_2$ (that is, the convex hull of $0, v_1, v_2, v_1 + v_2$) is, up to sign, the determinant of the defining vectors, $\det(v_1, v_2)$. (It's a good exercise to verify this.)
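A quick numerical sanity check of that exercise, with vectors of my own choosing: the $2\times 2$ determinant agrees with the base-times-height area of the parallelogram.

```python
import math

def det2(v1, v2):
    """Determinant of the 2x2 matrix with columns v1, v2."""
    return v1[0] * v2[1] - v1[1] * v2[0]

v1 = (3.0, 0.0)
v2 = (1.0, 2.0)

# Independent area computation: base |v1| times the height, i.e. the
# length of the component of v2 perpendicular to v1.
base = math.hypot(*v1)
scale = (v2[0] * v1[0] + v2[1] * v1[1]) / (base * base)   # projection coefficient
perp = (v2[0] - scale * v1[0], v2[1] - scale * v1[1])     # v2 minus its projection
height = math.hypot(*perp)

print(det2(v1, v2))    # signed area: 6.0
print(base * height)   # base-times-height area: 6.0
```

The determinant carries a sign recording the orientation of the pair $(v_1, v_2)$; the geometric area is its absolute value.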

This computation is the infinitesimal step when you are computing surface areas. Finally, the connection to the signs is that the determinant of a matrix $A$ has this alternating property when you swap two columns of $A$. Interchanging the two 1-forms is like swapping the columns.
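The alternating property is easy to see concretely (example vectors are my own): swapping the two columns flips the sign of the determinant, exactly mirroring $dx_i\,dx_j = -dx_j\,dx_i$.

```python
def det2(v1, v2):
    """Determinant of the 2x2 matrix with columns v1, v2."""
    return v1[0] * v2[1] - v1[1] * v2[0]

v1 = (2.0, 5.0)
v2 = (-1.0, 3.0)

print(det2(v1, v2))   # 2*3 - 5*(-1) = 11.0
print(det2(v2, v1))   # columns swapped: -11.0
```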

So, this sign rule is a formal idea that makes this connection to area computation work.

Keep in mind: 0-forms may be functions on the manifold, but 1-forms are not (real-valued) functions on the manifold. In particular, we can define the multiplication of 1-forms as we like, and differential geometers defined it in a way that makes the connection with surface area work. (And then lots of other wonderful things come out of these sign conventions, like Stokes' theorem and de Rham cohomology.)

Added from comments below:

A key question is: what is the intended domain for 2-forms?

Let me put it in linear algebra terms. Suppose you have two linear functionals $f$ and $g$ on $V$. What do you mean by $fg$? If you only want to plug in one vector, then $(fg)(x) = f(x)g(x)$ is pretty natural. But here we want to plug in more than one vector, so what do we do? Here are some possibilities: $(fg)(x,y) = f(x)g(y)$; $(fg)(x,y) = f(x)g(y) + f(y)g(x)$ (symmetric); $(fg)(x,y) = f(x)g(y) - f(y)g(x)$ (antisymmetric). Both the determinant and differential forms fit into the antisymmetric kind. (Note that the determinant of a $2\times 2$ matrix is an antisymmetric function of its pair of columns...)
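The three candidate products can be sketched in a few lines of code (function names and test vectors are my own). Taking $f = dx$ and $g = dy$, the coordinate functionals on $\mathbb R^2$, the antisymmetric product recovers the determinant, and swapping the factors flips the sign:

```python
def apply(f, x):
    """Apply the functional with coefficient tuple f to the vector x."""
    return sum(fi * xi for fi, xi in zip(f, x))

def tensor(f, g):   # (fg)(x, y) = f(x) g(y)
    return lambda x, y: apply(f, x) * apply(g, y)

def sym(f, g):      # symmetric product
    return lambda x, y: apply(f, x) * apply(g, y) + apply(f, y) * apply(g, x)

def wedge(f, g):    # antisymmetric product, the one used for forms
    return lambda x, y: apply(f, x) * apply(g, y) - apply(f, y) * apply(g, x)

# dx and dy as coordinate functionals on R^2:
dx, dy = (1.0, 0.0), (0.0, 1.0)
x, y = (2.0, 5.0), (-1.0, 3.0)

print(wedge(dx, dy)(x, y))   # det of columns x, y: 2*3 - 5*(-1) = 11.0
print(wedge(dy, dx)(x, y))   # -11.0, i.e. dy dx = -dx dy
print(wedge(dx, dx)(x, y))   # 0.0, i.e. dx dx = 0
```

Note that $dx\,dx = 0$ falls out of the same definition: the antisymmetric product of a functional with itself vanishes, just as a determinant with two equal columns does.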

For more on this, look up exterior powers and symmetric powers. This excursion into linear algebra is worth it, since these constructions come up a lot.