Pullback of differential forms under surjective submersions


I am trying to solve the following problem:

Let $M$ and $N$ be smooth manifolds, and suppose $P:M\to N$ is a surjective smooth submersion with connected fibers. Show that given $\alpha \in \Omega^k(M)$ there exists $\beta\in \Omega^k(N)$ such that $\alpha=P^*\beta$ if and only if $\iota_X \alpha=0$ and $L_X \alpha =0$ for every $X \in \ker(dP)$.

Solution

Let $\alpha=P^*\beta$, so by definition of the pullback we have that: \begin{align} \alpha_x(X)=\beta_{P(x)}(dP(X)) \end{align} for $x \in M$ and $X\in T_xM$. If we take $X$ such that $X\in \ker(dP)$, then $\iota_X \alpha = 0$.

To prove that $L_X \alpha =0$, we will use Cartan's magic formula: \begin{align} L_X\alpha= \iota_X d\alpha +d(\iota_X \alpha) \end{align} We already know that the second term is zero for $X\in \ker(dP)$. Using the fact that $d(P^*\beta)=P^*d\beta$ and the definition of the pullback, the first term in Cartan's formula becomes $d\beta_{P(x)}(dP(X))=0$ if $X\in \ker(dP)$.

Thus we have that $L_X \alpha =0$ for all $X\in \ker(dP)$.
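As a sanity check of this direction, here is a toy example (my own, not part of the problem): take $P:\mathbb{R}^3\to\mathbb{R}^2$, $P(x,y,z)=(x,y)$, and $\beta = f(x,y)\,dx\wedge dy$. Then $\ker(dP)$ is spanned by $\partial_z$, and \begin{align} \alpha = P^*\beta &= f(x,y)\,dx\wedge dy, \\ \iota_{\partial_z}\alpha &= 0 \quad\text{(no $dz$ appears)}, \\ L_{\partial_z}\alpha &= \frac{\partial f}{\partial z}\,dx\wedge dy = 0, \end{align} since $f$ does not depend on $z$. Both conditions hold, as the claim predicts.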

Conversely, let us work in local coordinates adapted to the submersion, in which $P:M\to N$ takes the form $P(x_1,\ldots,x_n,\ldots,x_m)=(x_1,\ldots,x_n)$.

Given $\alpha \in \Omega^k(M)$, we can write it as \begin{align} \alpha = \sum_{I}f_I(x_1,\ldots,x_m)\, dx_I \end{align} where $I=(i_1,\ldots,i_k)$ ranges over increasing multi-indices with $1\leq i_1<\cdots<i_k\leq m$. By hypothesis, $\iota_X \alpha =0$ for all $X\in \ker(dP)$. In these coordinates, $X\in \ker(dP)$ means that $X=c_iX_i$, where $X_i=0$ for $n<i\leq m$.

The second condition, $L_X \alpha =0$, together with Cartan's formula and the first condition, means that $\iota_X d\alpha=0$. Writing out $d\alpha$ we get: \begin{align} d\alpha = \sum_{I}\sum_{j=1}^{m}\dfrac{\partial f_I(x_1,\ldots,x_m)}{\partial x_j}\,dx_j\wedge dx_I \end{align} Evaluating on $X\in \ker(dP)$, we get that the change of $f_I$ in the direction of $x_j$ is zero for $n<j\leq m$. Then our conditions on $\alpha$ mean that $f_I(x_1, \ldots,x_m)= f_I(x_1,\ldots,x_n,0,\ldots,0)$.

Let $g_I(x_1,\ldots,x_n)= f_I(x_1,\ldots,x_n,0,\ldots,0)$, so that $f_I = g_I\circ P$, and define $\beta=\sum_{I} g_I(x_1,\ldots,x_n)\, dx_I$. Then: \begin{align} P^*\beta = \sum_{I} (g_I\circ P)(x_1,\ldots,x_m)\, dx_I=\sum_{I} f_I(x_1,\ldots,x_m)\, dx_I=\alpha \end{align}
Thus completing the proof.
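To see concretely how the coefficients of the $dx_i$ with $n<i\leq m$ drop out, here is a small example I add for illustration (with $m=3$, $n=2$, and $z=x_3$ the fiber coordinate): write $\alpha = g\,dx_1\wedge dz + h\,dx_1\wedge dx_2$. Then \begin{align} \iota_{\partial_z}\alpha = g\,\iota_{\partial_z}(dx_1\wedge dz) = -g\,dx_1, \end{align} so the hypothesis $\iota_{\partial_z}\alpha=0$ forces $g=0$. In general, $\iota_X\alpha=0$ for all $X\in\ker(dP)$ kills every coefficient $f_I$ whose multi-index $I$ contains an index $i>n$, and only the $dx_I$ with $I\subseteq\{1,\ldots,n\}$ survive.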

I want to check this proof, as I am not confident about the second part. I think I am missing something about how the $dx_i$ with $n<i\leq m$ disappear. I also think I am messing something up with the indices, but I can't say what exactly. My professor explained the main idea of the second part of the proof to me. I think I understand the general idea, but not well enough to put it in rigorous terms.

Answer

It doesn't look so good to me. We don't necessarily have $k=1$. Let's organize ourselves:

  • If $\alpha = P^*\beta$, then $X \in \ker {\rm d}P \require{cancel}$ says that $$\iota_X\alpha(X_1,\ldots, X_{k-1}) = \alpha(X,X_1,\ldots,X_{k-1}) = \beta(\cancelto{0}{{\rm d}P(X)},{\rm d}P(X_1),\ldots, {\rm d}P(X_{k-1})) = 0,$$so $\iota_X\alpha=0$. Also, we have that $$\begin{align} \mathcal{L}_X\alpha(X_1,\ldots,X_k) &= X(\alpha(X_1,\ldots,X_k)) - \sum_{i=1}^k \alpha(X_1,\ldots,[X,X_i],\ldots, X_k) \\ &= X(\beta({\rm d}P(X_1),\ldots, {\rm d}P(X_k))) - \sum_{i=1}^k \beta({\rm d}P(X_1),\ldots,{\rm d}P([X,X_i]),\ldots, {\rm d}P(X_k)) \\ &= \cancelto{0}{X(\beta({\rm d}P(X_1),\ldots, {\rm d}P(X_k)))} - \sum_{i=1}^k \beta({\rm d}P(X_1),\ldots,[\cancelto{0}{{\rm d}P(X)},{\rm d}P(X_i)],\ldots, {\rm d}P(X_k))\\ &= 0.\end{align}$$
  • Assume $\iota_X\alpha=0$ and $\mathcal{L}_X\alpha=0$ for all $X \in \ker {\rm d}P$. I don't have much time to go over all details now, but here's the idea: define $\beta$ locally using the charts given by the constant rank theorem -- $\mathcal{L}_X\alpha=0$ will allow us to define $\beta$ along the whole chart around $P(x)$ in $N$ via the flows of the vector fields $\partial_i$, with $n+1 \leq i \leq m$, since $(\mathcal{L}_{\partial_i}\alpha)_{i_1\cdots i_k} = \partial \alpha_{i_1 \cdots i_k}/\partial x^i$ (which you seem to have done, when writing $g_{i_1 \cdots i_k}(x_1,\ldots, x_n) = f_{i_1\cdots i_k}(x_1,\ldots,x_n,0,\ldots,0)$), and the condition $\iota_X\alpha=0$ will say that the resulting $\beta$ is well-defined (it is not clear to me what you mean by $c_iX_i$, are you missing a sum?).
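As a check of the coordinate identity $(\mathcal{L}_{\partial_i}\alpha)_{i_1\cdots i_k} = \partial \alpha_{i_1 \cdots i_k}/\partial x^i$ used above (a small verification I am adding, not part of the answer): for $\alpha = f\,dx\wedge dy$ on $\mathbb{R}^3$ and $X=\partial_z$, Cartan's formula gives $$\begin{align} \mathcal{L}_{\partial_z}(f\,dx\wedge dy) &= \iota_{\partial_z}\,d(f\,dx\wedge dy) + d\big(\iota_{\partial_z}(f\,dx\wedge dy)\big) \\ &= \iota_{\partial_z}\left(\frac{\partial f}{\partial z}\,dz\wedge dx\wedge dy\right) + 0 \\ &= \frac{\partial f}{\partial z}\,dx\wedge dy, \end{align}$$ so $\mathcal{L}_{\partial_z}\alpha=0$ says exactly that the coefficient is constant along each fiber; connectedness of the fibers is what upgrades this locally constant statement to a globally well-defined $\beta$.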