I'm reading Differentiable Manifolds by Nigel Hitchin, that is, his class notes for an Oxford course, freely available here. In particular, I'm trying to understand the interior product on manifolds and how he works out his example.
So the example given is of a basic $p$-form $\alpha = dx_1 \wedge dx_2 \wedge \ldots \wedge dx_p$ and a vector field $X = \sum_{i}a_i \frac{\partial}{\partial x_i}$. It is then stated that the interior product is given by $$i_{X}\alpha = a_1 dx_2 \wedge \ldots \wedge dx_p - a_2 dx_1 \wedge dx_3 \wedge \ldots \wedge dx_p + \ldots$$
Can anyone help me understand how one comes to this conclusion? In the proposition prior to the example, given a vector field $X$ on a manifold $M$, the interior product is characterized as a linear map $i_X: \Omega^p(M) \to \Omega^{p-1}(M)$ s.t. $i_Xdf = X(f)$ and $i_X(\alpha \wedge \beta) = i_X \alpha \wedge \beta + (-1)^p \alpha \wedge i_X \beta$ for $\alpha \in \Omega^p(M)$.
Based on this definition I was looking to find a $(p-1)$-form $\beta$ such that $$X(\beta) = a_1 dx_2 \wedge \ldots \wedge dx_p - a_2 dx_1 \wedge dx_3 \wedge \ldots \wedge dx_p + \ldots,$$ so kind of an "antiderivative".
To me it looks like $$\beta = x_1 dx_2 \wedge dx_3 \wedge \ldots \wedge dx_p + x_2 dx_1 \wedge dx_3 \wedge \ldots \wedge dx_p + \ldots + x_p dx_1 \wedge \ldots \wedge dx_{p-1}$$
would work, but then $d \beta = p\, dx_1 \wedge \ldots \wedge dx_p = p \alpha$, wouldn't it? So if that were my $\beta$, then I wouldn't get the connection between $df$ and $f$ in the first property of $i_X$.
Anyway, help here would be greatly appreciated, since I really can't seem to wrap my head around this one.
edit: OK, so I've been struggling with this for hours, but minutes after posting I think I'm starting to get it. I think that in order to obtain the expression I can just split $\alpha = \alpha_0 \wedge \alpha_1$, where $\alpha_0 = dx_1$ and $\alpha_1 = dx_2 \wedge \ldots \wedge dx_p$. Then I can apply the second property of $i_X$ along with the fact that $i_X(dx_j) = X(x_j) = a_j$. Proceeding inductively, I would then indeed get the desired result. Is this the correct way of going about it? Or is there an easier and more straightforward way of doing it?
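To check myself: if this is right, then for $p = 2$ the splitting would give $$i_X(dx_1 \wedge dx_2) = (i_X dx_1)\wedge dx_2 - dx_1 \wedge (i_X dx_2) = a_1\, dx_2 - a_2\, dx_1,$$ which does match the stated formula.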
As I pointed out in the comments, the approach you mention in your edit is the correct one.
Using the property $i_X(\alpha\wedge\beta) = (i_X\alpha)\wedge\beta + (-1)^p\alpha\wedge(i_X\beta)$, computing $i_X(dx^1\wedge\dots\wedge dx^p)$ reduces to computing $i_X(dx^j)$, which can be evaluated as follows:
$$i_X(dx^j) = dx^j(X) = X(x^j) = \sum_ia_i\frac{\partial}{\partial x_i}(x_j) = \sum_ia_i\frac{\partial x_j}{\partial x_i} = \sum_i a_i\delta_{ij} = a_j.$$
Alternatively,
$$i_X(dx^j) = dx^j(X) = dx^j\left(\sum_ia_i\frac{\partial}{\partial x_i}\right) = \sum_ia_idx^j\left(\frac{\partial}{\partial x_i}\right) = \sum_ia_i\delta_{ij} = a_j.$$
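Putting these together, the induction you describe in your edit yields the stated expression. For instance, for $p = 3$,
$$\begin{aligned} i_X(dx^1\wedge dx^2\wedge dx^3) &= a_1\, dx^2\wedge dx^3 - dx^1\wedge i_X(dx^2\wedge dx^3) \\ &= a_1\, dx^2\wedge dx^3 - a_2\, dx^1\wedge dx^3 + a_3\, dx^1\wedge dx^2, \end{aligned}$$
and in general
$$i_X(dx^1\wedge\dots\wedge dx^p) = \sum_{j=1}^p (-1)^{j-1} a_j\, dx^1\wedge\dots\wedge\widehat{dx^j}\wedge\dots\wedge dx^p,$$
where the hat marks the omitted factor. This is exactly the alternating-sign formula in Hitchin's example.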