General Strategy for Calculating Change of Coordinates with Differential Forms


I'm really confused about calculating "change in coordinates" in my integration on manifolds class. Suppose $$\omega = \sum_kf_k{\eta^k},$$ where $\eta^k=(-1)^{k+1}\,\mathrm{d}x^1 \wedge \ldots \wedge \widehat{\mathrm{d}x^k} \wedge \ldots \wedge \mathrm{d}x^n$ and $f_k$ are smooth functions defined on some open subset of $\mathbb{R}^n.$ If $y^j$ are different coordinates, related to $x^k$ by a smooth change of coordinates, how does one find the coefficients $\tilde{f}_k$ in terms of $y^k$?

First, what exactly is $\eta^k$? It is used everywhere in the proofs of the multivariable integral theorems in my textbook (Mathematical Analysis by A. Browder), and the author writes

Since $\Lambda^n(\mathbb{R}^{n*})$ is one-dimensional, to each $n$-form on $U$ we can associate a real-valued function on $U$. Since $\Lambda^{n-1}(\mathbb{R}^{n*})$, as well as $\Lambda^1(\mathbb{R}^{n*})=\mathbb{R}^{n*}$, is a space of dimension $n$, to each $(n-1)$-form on $U$, as well as to each $1$-form on $U$, we can associate a vector field on $U$. We next spell out such a correspondence, and relate it to the differential operator $d$.

just before defining it. What exactly does he mean? Why is $\Lambda^n(\mathbb{R}^{n*})$ one-dimensional? Does $\eta^k$ have a name, since it is used so often?

Second, how does one even go about calculating a change of coordinates for $\omega$? This is a "practice problem" (not collected) for my class. When I asked my professor about it, he thought the problem was obvious and did not go into any details, saying only that the coefficients are related by a contravariant rule. Embarrassed, I returned home and still couldn't figure out what to do. So, how do I approach calculating a change of coordinates with differential forms?

There is 1 answer below.

At the nuts-and-bolts procedural level, consider first this example.

If $ x_1= 3 y_1 + 2 y_2 $ and $ x_2 = 4 y_1 + 2 y_2$, then their differentials are related in the same way: $dx_1 = 3\,dy_1 + 2\,dy_2$ and $dx_2 = 4\,dy_1 + 2\,dy_2$.

The change of variables for the two-form

$ dx_1\wedge dx_2 =( 3 dy_1 + 2 dy_2)\wedge ( 4 dy_1 + 2 dy_2)$

is found by expanding using these skew-symmetric simplification rules:

$dx_2\wedge dx_1=-dx_1 \wedge dx_2$ (anti-commutativity)

and its close relative

$ dx_1\wedge dx_1 =0 =dx_2\wedge dx_2$. (self-wedges vanish)

Traditionally the surviving terms are sorted so that the subscripts appear in increasing order.

Expanding term by term gives $6\,dy_1\wedge dy_2 + 8\,dy_2\wedge dy_1$; after such sorting, the net result is $dx_1\wedge dx_2=(6-8)\, dy_1\wedge dy_2 = -2\, dy_1\wedge dy_2$.

If you have many more variables, you just have more such wedge products to expand out. What is really going on? Wedge products are a sneaky way to create and manipulate determinants and sub-determinants. Recall that swapping two adjacent rows or columns reverses the sign of a determinant, and that a matrix with two identical rows has zero determinant. Basically, that is all one needs to know to expand determinants. The expansion rules cited above compress all that algorithmic information into two simple algebra identities.
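To see the determinant connection concretely, here is a sympy sketch (the variable names are my own, not from the answer) for a general linear change of two variables:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')

# dx1 = a dy1 + b dy2,  dx2 = c dy1 + d dy2.
# Expanding dx1 ^ dx2 with the two skew-symmetry rules leaves
# (a*d - b*c) dy1 ^ dy2 -- the determinant of the coefficient matrix.
wedge_coeff = a*d - b*c                      # from the expansion by hand
det = sp.Matrix([[a, b], [c, d]]).det()      # determinant of coefficients
print(sp.simplify(wedge_coeff - det))  # 0
```

With $a=3$, $b=2$, $c=4$, $d=2$ this reproduces the $6-8=-2$ of the example above.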

P.S. In the last example, the change of coordinates was linear. More generally, for a nonlinear change of coordinates you proceed exactly as above, except that the constant coefficients are replaced by partial derivatives. That is, each differential is expanded using the formula $dx_i =\sum _k \frac{\partial x_i}{\partial y_k} \, dy_k$.
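As a nonlinear illustration (my own choice of example, not from the answer), take polar coordinates $x_1 = r\cos\theta$, $x_2 = r\sin\theta$; the constants of the linear case become the partial derivatives, and the coefficient of $dr\wedge d\theta$ in $dx_1\wedge dx_2$ is the Jacobian determinant:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Nonlinear change of coordinates: polar.
x1 = r * sp.cos(th)
x2 = r * sp.sin(th)

# dx_i = (dx_i/dr) dr + (dx_i/dtheta) dtheta, so the matrix of
# "coefficients" is the Jacobian, and its determinant is the
# coefficient of dr ^ dtheta.
J = sp.Matrix([[sp.diff(x1, r), sp.diff(x1, th)],
               [sp.diff(x2, r), sp.diff(x2, th)]])
print(sp.simplify(J.det()))  # r
```

This recovers the familiar $dx_1\wedge dx_2 = r\, dr\wedge d\theta$.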