Pullback of a differential form


My question is in regards to a proof in Lee's 'Introduction to Smooth Manifolds'. He proves a lemma about the pullback of a differential form on a manifold $N$, where $F:M\rightarrow N$ is a smooth map between manifolds.

In it he states that 'because the fiber is spanned by $dx^1\wedge\dots\wedge dx^n$, it suffices to show that both sides of the equation give the same result when evaluated on $(\partial_1,\dots,\partial_n)$'.

I am just trying to understand why this is sufficient. What is the link between the fiber being spanned by a single basis vector and only having to check that one specific sequence of basis vectors?

I was thinking something along the lines of: any other non-trivial sequence of basis vectors (i.e. one with no repeats) is just a permutation of that sequence, so both sides would differ by the same sign. But I'm fairly sure my reasoning is wrong, namely because he specifically says that the fiber of $\Lambda^n T^*M$ being spanned by a single basis vector is the reason we only need to check one specific sequence, and my reasoning above would hold regardless of what the basis of the space is.

If someone could clarify that would be much appreciated!


Your argument is essentially correct: on an $n$-manifold, the rank of the bundle $\Lambda^n T^*M$ (informally, the bundle of top-degree forms) is $1$. In any local coordinates $(x^i)$, the $n$-form $dx^1 \wedge \cdots \wedge dx^n$ is nonzero, and hence it locally spans $\Lambda^n T^*M$. (Note that there may be no global nonvanishing section of this bundle; one exists precisely when $M$ is orientable.)
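Here is a minimal numerical sketch of this rank-1 claim, for $n = 3$ in NumPy (my own illustration, not from Lee). It builds an arbitrary alternating $3$-tensor by antisymmetrizing a random tensor, and checks that it is a scalar multiple of the Levi-Civita symbol, which (in the determinant convention for the wedge product) gives the components of $dx^1 \wedge dx^2 \wedge dx^3$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 3

def sign(perm):
    # sign of a permutation, read off the determinant of its permutation matrix
    return int(round(np.linalg.det(np.eye(len(perm))[list(perm)])))

# An arbitrary alternating n-tensor: antisymmetrize a random n-tensor over its slots.
A = rng.standard_normal((n,) * n)
Alt = sum(sign(p) * A.transpose(p) for p in itertools.permutations(range(n)))

# Components of dx^1 ^ dx^2 ^ dx^3: the Levi-Civita symbol eps[i, j, k].
eps = np.zeros((n,) * n)
for p in itertools.permutations(range(n)):
    eps[p] = sign(p)

# Rank-1 fiber: every alternating n-tensor is a scalar multiple of eps,
# with the scalar given by the single independent component Alt[0, 1, 2].
multiple_of_eps = np.allclose(Alt, Alt[0, 1, 2] * eps)
```

So all $n^n$ components of `Alt` are pinned down by the one number `Alt[0, 1, 2]`, which is exactly why the fiber is one-dimensional.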

Now, fix a coordinate chart $(x^i)$ containing $p$, which in particular identifies $T_p M \leftrightarrow \mathbb{R}^n$ via $X^i \partial_i \leftrightarrow (X^1, \ldots, X^n)$.

Now, we can regard $\Omega_p \in \Lambda^n T_p^* M$ as a map $$\underbrace{\mathbb{R}^n \times \cdots \times \mathbb{R}^n}_n \to \mathbb{R}.$$ But this map is multilinear and totally skew in its arguments, so we must have $$\Omega_p: (Y_1, \ldots, Y_n) \mapsto \lambda \det \begin{pmatrix} Y_1 & \cdots & Y_n\end{pmatrix}$$ for some $\lambda \in \mathbb{R}$. Note that substituting $Y_j = \partial_j$ gives $\Omega_p(\partial_1, \ldots, \partial_n) = \lambda$.
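The "totally skew implies a multiple of $\det$" step can also be checked numerically; here is a sketch for $n = 3$ with a randomly generated alternating form (again my own illustration, with $\lambda$ recovered as the value on the standard frame):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3

def sign(perm):
    # sign of a permutation, read off the determinant of its permutation matrix
    return int(round(np.linalg.det(np.eye(len(perm))[list(perm)])))

# An arbitrary alternating n-form Omega, built by antisymmetrizing a random n-tensor.
A = rng.standard_normal((n,) * n)
Alt = sum(sign(p) * A.transpose(p) for p in itertools.permutations(range(n)))

def omega(*vectors):
    # evaluate the form on n vectors by contracting every slot
    return np.einsum('ijk,i,j,k->', Alt, *vectors)

lam = omega(*np.eye(n))           # lambda = Omega(e_1, ..., e_n)
Y = rng.standard_normal((n, n))   # rows Y[0], Y[1], Y[2] are arbitrary test vectors

# Claim: omega(Y_1, ..., Y_n) = lam * det(matrix with columns Y_1, ..., Y_n)
lhs = omega(*Y)
rhs = lam * np.linalg.det(Y.T)
```

Since `Alt` was an *arbitrary* alternating tensor, agreement of `lhs` and `rhs` illustrates that the single number $\lambda = \Omega(e_1, \ldots, e_n)$ determines the form on every input.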

Now, if we decompose the vectors $Y_j$ as $Y_j = Y_j^i \partial_i$, we have $$\Omega_p(Y_1, \ldots, Y_n) = \lambda \det \begin{pmatrix} Y_j^i \end{pmatrix} = \det \begin{pmatrix} Y_j^i \end{pmatrix} \Omega_p(\partial_1, \ldots, \partial_n).$$ In short, to verify an equality of $n$-forms at $p$, it's enough to check that both sides agree on $(\partial_1, \ldots, \partial_n)$: any other input is a determinant multiple of that one value.