I have read D. Bachman's textbook on forms, as well as parts of H. M. Edwards'. Now I am trying to understand Lie derivatives and interior products.
Suppose $\omega$ is a $1$-form defined on $\mathbb{R}^n$. What is $\omega(X)$? From what I understand,
$$\omega = \sum_{i = 1} ^n \omega_i dx_i$$
$\omega(X) = \omega X$; they are multiplied when $X$ is a scalar expression ($X = \sum_{i = 1}^n X_i\frac{\partial}{\partial x_i}$),
$\omega(X) = \omega \cdot X = \sum_{i=1}^n \omega_i X_i$ when $X$ is a 2-vector, 3-vector, ...
Is this correct?
What is $\omega(X)=?$ when $\omega$ is a 2-form? A 3-form?
In your answer, if possible, please avoid heavy differential-geometry notation; use calculus/vector concepts instead.
The object $\omega(X)$ does NOT represent multiplication. The idea of a differential form is actually built off ideas from linear algebra and then turned into calculus-based concepts when dealing with manifolds.
Given a vector space $V$, there is a "companion" space we can define here called the dual space, usually denoted by $V^*$. The dual space consists of linear functionals, i.e. linear maps $f:V \to \mathbb{R}$.
Coming back to the idea of a differential form, a $1$-form is a type of smoothly-varying linear functional that takes in a vector field $X$ and outputs a single number at each point. For instance, if $\omega$ is a smooth $1$-form on $\mathbb{R}^3$, then at each point of $\mathbb{R}^3$, $\omega$ takes a vector as input and outputs a number. The composite $\omega(X)$ is thus a function $\mathbb{R}^3 \to \mathbb{R}$. The basic $1$-forms on $\mathbb{R}^3$ are written $dx^1, dx^2, dx^3$, and the corresponding tangent vectors in those directions are $\frac{\partial}{\partial x^1}, \frac{\partial}{\partial x^2}, \frac{\partial}{\partial x^3}$, where we have $$ dx^i\left (\frac{\partial}{\partial x^j}\right ) \;\; =\;\; \delta_j^i. $$
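Concretely, the pairing reduces to the componentwise sum $\omega(X) = \sum_i \omega_i X_i$, which is an ordinary function on $\mathbb{R}^n$. A minimal sketch with sympy (the particular $\omega$ and $X$ below are made-up examples):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A 1-form omega = y dx + x dy + dz, stored by its components (omega_1, omega_2, omega_3)
omega = [y, x, 1]
# A vector field X = x d/dx + y d/dy + z d/dz, stored by its components
X = [x, y, z]

# Since dx^i(d/dx^j) = delta^i_j, the pairing reduces to omega(X) = sum_i omega_i X_i,
# an ordinary function on R^3.
omega_X = sum(w_i * X_i for w_i, X_i in zip(omega, X))
print(sp.expand(omega_X))  # 2*x*y + z
```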
It's not proper to think of this strictly as multiplication, because that picture doesn't extend well to higher-degree forms. Extending this to other forms, a $2$-form is a functional that takes in two vectors and outputs a number, linearly in each argument: if $\eta$ is a $2$-form it takes in two vector fields $X$ and $Y$ and outputs a number $\eta(X,Y)$. A $k$-form takes in $k$-many vectors: $\eta(X_1,\ldots, X_k)$. The basic $2$-forms on $\mathbb{R}^3$ are $dx^1\wedge dx^2, dx^2\wedge dx^3, dx^3\wedge dx^1$, where the operation on vectors is given by
$$ dx^i\wedge dx^j(X,Y) \;\; =\;\; dx^i(X)dx^j(Y) - dx^i(Y)dx^j(X). $$
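This formula is easy to test directly, and it makes the antisymmetry visible; a small Python sketch (the vectors here are arbitrary examples, and components are 0-indexed):

```python
# Evaluate dx^i ∧ dx^j on a pair of vectors in R^3 using the formula above.
def dx(i):
    """The basic 1-form dx^i: picks out component i of a vector (0-indexed)."""
    return lambda v: v[i]

def wedge(i, j):
    """(dx^i ∧ dx^j)(X, Y) = dx^i(X) dx^j(Y) - dx^i(Y) dx^j(X)."""
    return lambda X, Y: dx(i)(X) * dx(j)(Y) - dx(i)(Y) * dx(j)(X)

X = [1, 2, 3]
Y = [4, 5, 6]
w = wedge(0, 1)  # dx^1 ∧ dx^2
print(w(X, Y))   # 1*5 - 4*2 = -3
print(w(Y, X))   # swapping the arguments flips the sign: 3
print(w(X, X))   # a 2-form vanishes on a repeated vector: 0
```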
General Principles for Computing Lie Derivatives
Qualitatively, the Lie derivative of a form measures how much the form changes as we drag it along the flow of a vector field $X$. Viewing the vector field $X$ as a differential operator, the Lie derivative of a smooth function (i.e. a $0$-form) is given by $$ \mathscr{L}_Xf \;\; =\;\; Xf $$
where $X$ is acting by differentiation. For a 1-form $\omega$, we have that $$ \mathscr{L}_X\omega(Y) \;\; =\;\; X\omega(Y) - \omega([X,Y]) $$
where $X$ acts on the smooth function $\omega(Y)$ by differentiation, and $[X,Y] = XY - YX$ is the Lie bracket of the two vector fields. The Lie derivative can also be found via Cartan's Magic Formula: $$ \mathscr{L}_X\omega \;\; =\;\; d(X\lrcorner \omega) + X\lrcorner d\omega $$
where $d$ is the exterior derivative, which takes $k$-forms to $(k+1)$-forms, and $X\lrcorner$ is the interior product (contraction with $X$), which takes $k$-forms to $(k-1)$-forms.
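The two formulas for $\mathscr{L}_X\omega$ can be checked against each other symbolically on $\mathbb{R}^2$; in the sketch below the fields $\omega$, $X$, $Y$ are arbitrary made-up examples, and the two sides agree for any smooth choices:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

# Arbitrary example fields on R^2 (any smooth choices work)
omega = [x*y, x + y**2]   # omega = xy dx + (x + y^2) dy
X = [y, x**2]             # X = y d/dx + x^2 d/dy
Y = [x, -y]               # Y = x d/dx - y d/dy

def pair(w, v):
    """Evaluate a 1-form on a vector field: w(v) = sum_i w_i v_i."""
    return sum(wi * vi for wi, vi in zip(w, v))

def apply_vec(v, f):
    """A vector field acting on a function by differentiation: v(f) = sum_i v_i df/dx_i."""
    return sum(vi * sp.diff(f, xi) for vi, xi in zip(v, coords))

def bracket(a, b):
    """Lie bracket [a,b] = ab - ba, componentwise [a,b]_i = a(b_i) - b(a_i)."""
    return [apply_vec(a, bi) - apply_vec(b, ai) for ai, bi in zip(a, b)]

# Definition:  (L_X omega)(Y) = X(omega(Y)) - omega([X, Y])
lhs = apply_vec(X, pair(omega, Y)) - pair(omega, bracket(X, Y))

# Cartan's Magic Formula:  L_X omega = d(X ⌟ omega) + X ⌟ d(omega)
iXw = pair(omega, X)                                # X ⌟ omega, a function (0-form)
d_iXw = [sp.diff(iXw, xi) for xi in coords]         # its exterior derivative, a 1-form
curl = sp.diff(omega[1], x) - sp.diff(omega[0], y)  # d omega = curl * dx ∧ dy on R^2
iX_dw = [-curl * X[1], curl * X[0]]                 # X ⌟ (f dx ∧ dy) = f (X_1 dy - X_2 dx)
rhs = pair([a + b for a, b in zip(d_iXw, iX_dw)], Y)

print(sp.simplify(lhs - rhs))  # 0: the two formulas agree
```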