I am trying to solve the following problem:
Let $M$ and $N$ be smooth manifolds, and suppose $P:M\to N$ is a surjective smooth submersion with connected fibers. Show that given $\alpha \in \Omega^k(M)$ there exists $\beta\in \Omega^k(N)$ such that $\alpha=P^*\beta$ if and only if $\iota_X \alpha=0$ and $L_X \alpha =0$ for every $X \in \ker(dP)$.
Solution
Let $\alpha=P^*\beta$, so by definition of pullback we have that: \begin{align} \alpha_x(X)=\beta_{P(x)}(dP(X)) \end{align} for $x \in M$ and $X\in T_xM$\ If we take $X$ such that $X\in \ker(dP)$, then $\iota_X \alpha = 0$.
To prove that $L_X \alpha =0$, we will use Cartan's magic formula: \begin{align} L_X\alpha= \iota_X d\alpha +d(\iota_X \alpha) \end{align} We already have that the second term is zero for $X\in ker(dP)$.\ Using the fact that $d(P^*\beta)=P^*d\beta$, and the definition of pullback we get that the first term in Cartan's formula becomes $d\beta_{P(x)}(dP(X))=0$ if $X\in \ker(dP)$.
Thus we have that $L_X \alpha =0$ for all $X\in \ker(dP)$.
Conversely, let us define the pullback in terms of local coordinates with $P:M\to N$ as $P(x_1,...,x_n,...,x_m)=(x_1,...,x_n)$.
Given $\alpha \in \Omega^k(M)$, we can write it as \begin{align} \alpha = \sum_{1}^{m}f_I(x_1,...,x_m) dx_I \end{align} for $I=(x_1,...,x_m)$. By hypothesis, $\iota_X \alpha =0$ for all $X\in \ker(dP)$. By the way we defined the pullback, the expression $X\in \ker(dP)$ means that $X=c_iX_i$, where $X_i=0$ for $n<i\leq m$.
The second condition $L_X \alpha =0$, with cartan's formula and the first condition means that $\iota_X d\alpha=0$. Replacing we get: \begin{align} d\alpha = \sum_{I,j}^{m}\dfrac{f_I(x_1,...,x_n)}{\partial(x_i)}dx_j\wedge dx_I \end{align} Evaluating on $X\in \ker(dP)$, we get that the change of $f_I$ in the direction of $x_i$ is zero for $n<i\leq m$. Then our conditions on alpha means that $f_I(x_1, ...,x_m)= f_I(x_1,...,x_n,0,...,0)$ for all $X\in \ker(dP)$.
Let $g_J= f_I\circ P$, so we have that:
\begin{align}
P^*\alpha = \sum_{I}^{m} f_I\circ P(x_1,...,x_m) dx_I=\sum_{J}^{n} g_J(x_1,...,x_n) dx_J=\beta
\end{align}
Thus completing the proof.
I want to check this proof as I am not confident on the second part. I think I am missing something on how the $dx_i$ with $n<i\leq m$ dissappear. Also I think I am messing something up with the indices but I can't say what exactly. My professor explained me the main idea of the second part of the proof. I think I understand the general idea, but not enough to put it in rigorous terms.
It doesn't look so good to me. We don't necessarily have $k=1$. Let's organize ourselves: