Here is my proof that the elementary $k$-forms form a basis for $A^k{(\mathbb{R}^n)}$. Is this correct?


Question:

Here is my proof that the elementary $k$-forms form a basis for $A^k{(\mathbb{R}^n)}$. Is my proof below correct?

Context:

I have searched for an answer in a textbook [1]. What I found is that the authors give a proof for the specific case of 2-forms, and then state that "An analogous but messier [emphasis added] computation would show that for any $k$-form in $\mathbb{R}^n$ the form is determined by its values on sequences $\vec{e}_{i_1},\ldots,\vec{e}_{i_k}$..."

This doesn’t meet my needs:

(1) I need to be certain that I understand this proof so that I can continue my independent study with confidence. Relatedly, if the general case can be written out neatly, that bolsters my confidence that I have understood the definitions, theorems, language, etc. in the remaining material.

(2) The proof of this theorem given in [1] meanders somewhat from the definitional statements, and this too does not meet my needs. I learn most efficiently and effectively when proofs tack closely to the definitions, so that is what I have attempted to do.

(3) In their proof, the authors of [1] only consider arguments of $\varphi$ given in strictly increasing order. This does not meet my needs. In my proof I also consider the cases where the arguments are not necessarily given in strictly increasing order.

Statement of the Problem:

Given the following definition

Definition ($A^k{(\mathbb{R}^n)}$) The space of $k$-forms in $\mathbb{R}^n$ is denoted $A^k{(\mathbb{R}^n)}$,

prove the following theorem.

Theorem (a) The elementary $k$-forms form a basis for $A^k{(\mathbb{R}^n)}$: every multilinear and antisymmetric function $\varphi$ of $k$ vectors in $\mathbb{R}^n$ can be uniquely written \begin{equation} \varphi = \sum_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}}.\end{equation} (b) The coefficients $a_{i_1\cdots i_k}$ are given by $$a_{i_1\cdots i_k} = \varphi{(\vec{e}_{i_1},\ldots,\vec{e}_{i_k})}.$$
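As a concrete sanity check of the statement (an illustration of mine, not part of the proof): for $n = 3$ and $k = 2$ the elementary $2$-forms are $dx_1\wedge dx_2$, $dx_1\wedge dx_3$, and $dx_2\wedge dx_3$, so the theorem asserts that every $2$-form on $\mathbb{R}^3$ can be written uniquely as $$\varphi = a_{12}\,dx_1\wedge dx_2 + a_{13}\,dx_1\wedge dx_3 + a_{23}\,dx_2\wedge dx_3, \qquad a_{ij} = \varphi{(\vec{e}_i, \vec{e}_j)}.$$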

Proof:

This proof is divided into three parts. Parts A and B together address part (a) of the theorem, while Part C addresses part (b).

Part A. Pursuant to the definition of linear independence (e.g., cf. [1]), a set of $k$-forms is linearly independent if there is at most one way of writing the multilinear antisymmetric function $\varphi$ as a linear combination of $k$-forms; that is, if $$ \varphi = \sum_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}} = \sum_{1\leq i_1 < \cdots < i_k \leq n}{b_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}}$$ implies $a_{i_1\cdots i_k} = b_{i_1\cdots i_k}.$
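Note that this formulation is equivalent to the more familiar one (the only vanishing linear combination is the trivial one): subtracting the two expansions gives $$\sum_{1\leq i_1 < \cdots < i_k \leq n}{(a_{i_1\cdots i_k} - b_{i_1\cdots i_k})\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}} = 0,$$ so uniqueness of the coefficients says precisely that the zero form admits only the expansion with all coefficients zero.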

We proceed to evaluate $\varphi$ on any $k$ standard basis vectors listed in increasing order.
\begin{align} \varphi{(\vec{e}_{j_1}, \ldots, \vec{e}_{j_k})} & = \sum_{1\leq i_1 < \cdots < i_k \leq n}{ {a_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}} } {(\vec{e}_{j_1}, \ldots, \vec{e}_{j_k})} \\ & = \sum_{1\leq i_1 < \cdots < i_k \leq n}{ {a_{i_1\cdots i_k} } } \det{ \begin{bmatrix} e_{j_1,i_1} & \cdots & e_{j_1,i_k} \\ \vdots & \ddots & \vdots \\ e_{j_k,i_1} & \cdots & e_{j_k,i_k} \end{bmatrix} }\quad \textrm{Eq. 1} \\ & = \begin{cases} a_{i_1\cdots i_k} & \textrm{for } i_{1} = j_{1}, \ldots, i_{k} = j_{k} \\ 0 & \textrm{otherwise.} \end{cases} \quad \textrm{Eq. 2} \end{align} Similarly, we find \begin{align} \varphi{(\vec{e}_{j_1}, \ldots, \vec{e}_{j_k})} & = \begin{cases} b_{i_1\cdots i_k} & \textrm{for } i_{1} = j_{1}, \ldots, i_{k} = j_{k} \\ 0 & \textrm{otherwise.} \end{cases} \end{align} Taking $i_{1} = j_{1}, \ldots, i_{k} = j_{k} $, \begin{align} \varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} & = a_{i_1\cdots i_k} \textrm{ and also } \\ \varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} & = b_{i_1\cdots i_k}. \end{align}
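To illustrate Eqs. 1 and 2 in a small case (this example is mine, not from [1]): take $n = 3$, $k = 2$, and evaluate at $(\vec{e}_1, \vec{e}_3)$. Then $$dx_1\wedge dx_3{(\vec{e}_1, \vec{e}_3)} = \det{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}} = 1, \qquad dx_1\wedge dx_2{(\vec{e}_1, \vec{e}_3)} = \det{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}} = 0,$$ so evaluating at $(\vec{e}_1, \vec{e}_3)$ picks out exactly the coefficient $a_{13}$.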

By the transitive relation [2], whenever $$\varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} = a_{i_1\cdots i_k} \quad \textrm{Eq. 3} $$ and $$\varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} = b_{i_1\cdots i_k}, $$ then also $ a_{i_1\cdots i_k} = b_{i_1\cdots i_k} $. So we find that the elementary $k$-forms are linearly independent.

Part B. Pursuant to the definition of span (e.g., cf. [1]), the span of the elementary $k$-forms $dx_{i_1}\wedge \cdots\wedge dx_{i_k}$ is the set of linear combinations $\sum\limits_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k} \, dx_{i_1}\wedge \cdots\wedge dx_{i_k} }$.

We ask: Is every multilinear and antisymmetric function $\varphi$ of $k$ vectors in $\mathbb{R}^n$ in the span of the elementary $k$-forms?

We start by proposing that $$\varphi = \sum\limits_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k} \, dx_{i_1}\wedge \cdots\wedge dx_{i_k} }.$$ Following through as above, we find that for any $k$ standard basis vectors listed in increasing order $$\varphi{(\vec{e}_{j_1},\ldots, \vec{e}_{j_k})} = a_{j_1\cdots j_k}.$$ Next we evaluate $\varphi$ on any $k$ standard basis vectors, not necessarily listed in increasing order. We note from the definition of the determinant of a square matrix $A = [\vec{a}_1, \ldots, \vec{a}_n]$ (e.g., cf. [1]) that exchanging any two columns changes the sign of the determinant. Therefore, with respect to Eqs. 1 and 2, we write \begin{equation} \varphi{(\vec{e}_{l_1},\ldots, \vec{e}_{l_k})} = (-1)^p\,a_{j_1\cdots j_k}, \end{equation}
where $(j_1, \ldots, j_k)$ is the strictly increasing rearrangement of $(l_1, \ldots, l_k)$ and $p$ is a non-negative integer giving the number of exchanges of two arguments of $\varphi$ required to bring the arguments into strictly increasing order. (If two of the indices $l_1, \ldots, l_k$ coincide, both sides vanish by antisymmetry.) Irrespective of the actual order of the arguments of $\varphi$, we find that $\varphi$ can be written as a linear combination of elementary $k$-forms. So we find that the elementary $k$-forms span $A^k{(\mathbb{R}^n)}$.
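To make the sign bookkeeping concrete (again $n = 3$, $k = 2$, an example of mine): one exchange brings $(\vec{e}_3, \vec{e}_1)$ into increasing order, so $$\varphi{(\vec{e}_3, \vec{e}_1)} = -\varphi{(\vec{e}_1, \vec{e}_3)} = (-1)^1\,a_{13}.$$ Although the number of exchanges used is not unique, its parity is, so $(-1)^p$ is well defined.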

Part C. Multiplying both sides of Eq. 3 by $-1$ and then adding $\varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} + a_{i_1\cdots i_k}$ to both sides, we find that $$a_{i_1\cdots i_k} = \varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})}. $$
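Written out, with $\varphi_0$ as shorthand (mine) for $\varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})}$, the manipulation is $$-\varphi_0 = -a_{i_1\cdots i_k} \;\Longrightarrow\; -\varphi_0 + \left(\varphi_0 + a_{i_1\cdots i_k}\right) = -a_{i_1\cdots i_k} + \left(\varphi_0 + a_{i_1\cdots i_k}\right) \;\Longrightarrow\; a_{i_1\cdots i_k} = \varphi_0.$$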

Q.E.D.

Bibliography:

[1] Hubbard and Hubbard, "Vector Calculus, Linear Algebra, and Differential Forms", 2nd ed., 2002, pp. 193, 195, 469, 562-563.

[2] "Transitive relation", Wikipedia, https://en.wikipedia.org/wiki/Transitive_relation.

Best Answer:

First of all, I would like to point out that Munkres provides an elegant proof in his book Analysis on Manifolds. He first shows a lemma that says: for alternating $k$-tensors $f$ and $g$, if $f(a_{i_1},\dots,a_{i_k})=g(a_{i_1},\dots,a_{i_k})$ for every ascending $k$-tuple $(i_1,\dots,i_k)$, then $f\equiv g$.
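If I understand that approach correctly (a sketch of mine, not Munkres's exact wording): given an alternating $k$-tensor $f$, one forms the candidate expansion $$g = \sum_{1\leq i_1 < \cdots < i_k \leq n}{f{(\vec{e}_{i_1},\ldots,\vec{e}_{i_k})}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}},$$ checks that $f$ and $g$ agree on every ascending $k$-tuple of standard basis vectors, and concludes $f \equiv g$ from the lemma.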

My only criticism is the end of Part B: compared to the rest of your proof (especially Part C: how is that necessary?), your argument for the sign of $\varphi$ is almost hand-wavy. I reference Munkres again; he proves basic results (prior to the proof) about permuting the entries of a $k$-tensor, including: if $f$ is alternating and $\sigma$ has odd parity, then $f^\sigma=-f$.
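For reference (my paraphrase of Munkres's notation, assuming I recall it correctly): $f^\sigma$ denotes $f$ with its arguments permuted by $\sigma$, $$f^\sigma{(v_1,\ldots,v_k)} = f{(v_{\sigma(1)},\ldots,v_{\sigma(k)})},$$ and the cited result says that if $f$ is alternating and $\sigma$ is odd then $f^\sigma = -f$. That is exactly the fact needed to make the $(-1)^p$ step in Part B rigorous.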

Apart from that (which is just my opinion at the end of the day), I think you have a typo here: "which implies $a_{i_1\cdots i_k} = b_{i_1\cdots i_k}, \ldots, a_k = b_k$."