How to prove that the definition of exterior product of differential forms is not ambiguous?


On page 91 of the book A Visual Introduction to Differential Forms and Calculus on Manifolds, the exterior product of two differential forms $\alpha \in \bigwedge^{r}(\mathbb{R}^n)$ and $\beta \in \bigwedge^{s}(\mathbb{R}^n)$ is defined by $$ \alpha \wedge \beta = \left( \sum_{I}a_I dx^I\right) \wedge \left( \sum_{J}b_J dx^J\right)=\sum_{I}\sum_{J}a_Ib_J dx^I\wedge dx^J \hspace{2cm}(\ast) $$ where $\alpha =\sum_{I}a_I dx^I$ and $\beta =\sum_{J}b_J dx^J$. Here $\{dx^I\}_{I}$ is the basis of $\bigwedge^{r}(\mathbb{R}^n)$ obtained (by the wedge products $dx^I=dx^{i_1}\wedge \ldots\wedge dx^{i_r}$) from the basis $\{dx^1, \ldots, dx^n\}$ of $(\mathbb{R}^n)^\ast$ dual to the canonical basis $\{e_1,\ldots,e_n\}$ of $\mathbb{R}^n$; likewise, $\{dx^J\}_{J}$, with $dx^J=dx^{j_1}\wedge \ldots\wedge dx^{j_s}$, is the corresponding basis of $\bigwedge^{s}(\mathbb{R}^n)$.
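For concreteness, here is a minimal computational sketch of $(\ast)$ (my own illustration, not code from the book; the names `perm_sign` and `wedge` are made up): a $k$-form is stored as a dictionary mapping strictly increasing index tuples $I$ to coefficients $a_I$, and the wedge reorders the concatenated indices with the appropriate sign.

```python
# A k-form on R^n stored as {increasing index tuple I: coefficient a_I}.
# Hypothetical helper names; indices are 1-based to match dx^1, ..., dx^n.
def perm_sign(seq):
    """Sign of the permutation that sorts seq (entries assumed distinct)."""
    sign = 1
    seq = list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def wedge(alpha, beta):
    """Definition (*): multiply coefficients and wedge the basis covectors,
    reordering dx^I ^ dx^J into increasing order with the resulting sign."""
    result = {}
    for I, a in alpha.items():
        for J, b in beta.items():
            if set(I) & set(J):              # a repeated dx^i gives 0
                continue
            K = tuple(sorted(I + J))
            result[K] = result.get(K, 0.0) + perm_sign(I + J) * a * b
    return result

# dx^2 ^ dx^1 = -(dx^1 ^ dx^2):
print(wedge({(2,): 1.0}, {(1,): 1.0}))       # {(1, 2): -1.0}
```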

Do Carmo, in his book Differential Forms and Applications, gives the same definition of the exterior product of differential forms.

Now fix an arbitrary basis $\{y_1,\ldots,y_n\}$ of $\mathbb{R}^n$. Let $\{dy^I\}_{I}$ be the basis of $\bigwedge^{r}(\mathbb{R}^n)$ obtained (by the wedge products $dy^I=dy^{i_1}\wedge \ldots\wedge dy^{i_r}$) from the basis $\{dy^1, \ldots, dy^n\}$ of $(\mathbb{R}^n)^\ast$ dual to $\{y_1,\ldots,y_n\}$, and likewise let $\{dy^J\}_{J}$, with $dy^J=dy^{j_1}\wedge \ldots\wedge dy^{j_s}$, be the corresponding basis of $\bigwedge^{s}(\mathbb{R}^n)$.

Since $\{dy^I\}_{I}$ and $\{dy^J\}_{J}$ are bases of $\bigwedge^{r}(\mathbb{R}^n)$ and $\bigwedge^{s}(\mathbb{R}^n)$ respectively, we can rewrite the differential forms $\alpha$ and $\beta$ in terms of these bases as $$ \alpha = \sum_{I}g_I dy^I \hspace{1cm} \mbox{and}\hspace{1cm} \beta=\sum_{J} h_{J} dy^J. $$
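In practice the coefficients $g_I$ can be computed by evaluating $\alpha$ on the new basis vectors, since $g_I=\alpha(y_{i_1},\ldots,y_{i_r})$. A sketch of this (again my own illustration, with hypothetical names and 0-based indices):

```python
# Row l of Y holds the basis vector y_l in canonical coordinates, so
# dx^j(y_i) = Y[i, j] and
#   g_I = alpha(y_{i_1}, ..., y_{i_r}) = sum_J a_J * det( Y[I rows, J cols] ).
import numpy as np
from itertools import combinations

def coeffs_in_basis(a, Y, r):
    """Coefficients of the r-form sum_J a_J dx^J in the dual basis dy^I."""
    n = Y.shape[0]
    return {I: sum(aJ * np.linalg.det(Y[np.ix_(I, J)]) for J, aJ in a.items())
            for I in combinations(range(n), r)}

# Example on R^3: alpha = dx^0 ^ dx^1 (0-based) expressed in a random basis.
rng = np.random.default_rng(0)
Y = rng.normal(size=(3, 3))                  # invertible with probability 1
print(coeffs_in_basis({(0, 1): 1.0}, Y, 2))
```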

It is not clear, at least to me, that $$ \alpha\wedge \beta = \left( \sum_{I}g_I dy^I \right) \wedge \left( \sum_{J} h_{J} dy^J \right)= \sum_{I}\sum_{J} g_Ih_J dy^I \wedge dy^J $$ is the same differential form as the one obtained in $(\ast)$.

Question. How to prove that the definition of exterior product of differential forms is not ambiguous? That is, how to prove that $$ \left( \sum_{I}a_I dx^I\right) \wedge \left( \sum_{J}b_J dx^J\right) = \left( \sum_{I}g_I dy^I \right) \wedge \left( \sum_{J} h_{J} dy^J\right) \mbox{ ? } $$

Honestly, I have no idea how to proceed with this problem.

2 Answers

Accepted answer:

I do not know whether this is the most elegant or most direct answer to the question. I would appreciate suggestions for improving it, as well as answers that explore a different point of view.

Lemma. Let $\omega\in \bigwedge^{r}(\mathbb{R}^n)$. Suppose that

  • $\sum_{K}a_K du^K$ is the expression of $\omega$ in the basis $\{du^K\}_{K}$ of $\bigwedge^{r}(\mathbb{R}^n)$ obtained (by the wedge products $du^K=du^{k_1}\wedge \ldots\wedge du^{k_r}$) from the basis $\{du^1, \ldots, du^n\}$ of $(\mathbb{R}^n)^\ast$ dual to the basis $\{u_1,\ldots,u_n\}$ of $\mathbb{R}^n$.

  • $\sum_{L}b_L dv^L$ is the expression of $\omega$ in the basis $\{dv^L\}_{L}$ of $\bigwedge^{r}(\mathbb{R}^n)$ obtained (by the wedge products $dv^L=dv^{\ell_1}\wedge \ldots\wedge dv^{\ell_r}$) from the basis $\{dv^1, \ldots, dv^n\}$ of $(\mathbb{R}^n)^\ast$ dual to the basis $\{v_1,\ldots,v_n\}$ of $\mathbb{R}^n$.

  • $c=(c_{k\ell})_{n\times n}$ is the change-of-basis matrix that expresses the basis $\{u_1,\ldots,u_n\}$ in terms of the basis $\{v_1,\ldots,v_n\}$ via the equations $u_k=\sum_{\ell=1}^{n}c_{k\ell}v_\ell$.

Then $$ a_K=\sum_{L} b_L \det(c_{k_p\ell_q})_{r\times r}, $$ where $(c_{k_p\ell_q})_{r\times r}$ is the $r \times r$ matrix whose entries are the entries of the matrix $c=(c_{k\ell})$ with row indices $k_p\in K=\{k_1<\ldots<k_r\}$ and column indices $\ell_q\in L=\{\ell_1<\ldots<\ell_r\}$.
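Before the proof, here is a quick numerical spot-check of the lemma's formula (my own sketch, not part of either book; indices are 0-based and $\{v_\ell\}$ is taken to be the canonical basis, so $u_k$ is row $k$ of the matrix $c$):

```python
# Hypothetical names throughout; eval_form evaluates an r-form given as
# coefficients in a dual basis whose functionals are rows of dual_rows.
import numpy as np
from itertools import combinations

def eval_form(coeffs, dual_rows, vectors):
    """Evaluate sum_K coeffs[K] dw^K on r vectors, where the functional dw^k
    has coefficient row dual_rows[k]."""
    pair = vectors @ dual_rows.T             # pair[p, k] = dw^k(vectors[p])
    return sum(a * np.linalg.det(pair[:, list(K)]) for K, a in coeffs.items())

rng = np.random.default_rng(1)
n, r = 4, 2
c = rng.normal(size=(n, n))                  # u_k = sum_l c[k, l] v_l
b = {L: rng.normal() for L in combinations(range(n), r)}

# The lemma's formula: a_K = sum_L b_L det( c[k_p, l_q] ).
a = {K: sum(bL * np.linalg.det(c[np.ix_(K, L)]) for L, bL in b.items())
     for K in combinations(range(n), r)}

# Both coefficient sets should describe the same r-form omega:
du = np.linalg.inv(c).T                      # rows: the dual basis of u
W = rng.normal(size=(r, n))                  # a random pair of vectors
print(np.isclose(eval_form(a, du, W), eval_form(b, np.eye(n), W)))  # True
```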

Proof. Fix $K=\{k_1<\ldots<k_r\}$ and consider the basis vectors $u_{k_1},\ldots, u_{k_p},\ldots, u_{k_r}\in\{u_1,\ldots, u_n\}$. Since $du^{K'}(u_{k_1},\ldots, u_{k_r})$ equals $1$ when $K'=K$ and $0$ otherwise, we have $$ \omega(u_{k_1},\ldots, u_{k_r}) = \sum_{K'}a_{K'}\,du^{K'}(u_{k_1},\ldots, u_{k_r}) = a_K. $$ On the other hand, \begin{align} \omega(u_{k_1},\ldots, u_{k_r}) =& \sum_{L}b_L\, dv^L(u_{k_1},\ldots, u_{k_r}) \\ =& \sum_{L}b_{L} \det\left( dv^{\ell_q}( u_{k_p})\right)_{r\times r} \\ =& \sum_{L}b_{L} \det\left( dv^{\ell_q}\left( \sum_{\ell=1}^{n}c_{k_p\ell}\,v_\ell \right)\right)_{r\times r} \\ =& \sum_{L}b_{L} \det\left( c_{k_p\ell_q} \right)_{r\times r}, \end{align} where the last equality uses $dv^{\ell_q}(v_\ell)=\delta_{\ell_q\ell}$. Therefore $$ a_K=\sum_{L} b_L \det(c_{k_p\ell_q})_{r\times r}. $$


Now for the proof of the claim in the question. Since $$ \left( \sum_{I}a_I dx^I\right) \wedge \left( \sum_{J}b_J dx^J\right)\in \bigwedge^{r+s}(\mathbb{R}^n) $$ and $\{dx^K\}_{K}$ is a basis of $\bigwedge^{r+s}(\mathbb{R}^n)$, there are numbers $a_{K}$, with $K$ running through all ordered sets $\{k_1< \ldots< k_{r+s}\}\subset \{1,\ldots, n\}$, such that $$ \left( \sum_{I}a_I dx^I\right) \wedge \left( \sum_{J}b_J dx^J\right) = \sum_{K}a_K dx^K. $$ For the same reason there are numbers $b_{L}$, with $L$ running through all ordered sets $\{\ell_1< \ldots< \ell_{r+s}\}\subset \{1,\ldots, n\}$, such that $$ \left( \sum_{I}g_I dy^I \right) \wedge \left( \sum_{J} h_{J} dy^J\right) = \sum_{L}b_L dy^L. $$

Fix $K=\{k_1<\ldots< k_{r+s} \}$ and write $x_k:=e_k$ for the canonical basis vectors. Evaluating gives $$ \left( \sum_{I}a_I dx^I\right) \wedge \left( \sum_{J}b_J dx^J\right) (x_{k_1},\ldots, x_{k_q}, \ldots, x_{k_{r+s}}) = \sum_{K'}a_{K'} dx^{K'} (x_{k_1},\ldots, x_{k_q}, \ldots, x_{k_{r+s}}) = a_K. $$

Let $c=(c_{k\ell})_{n\times n}$ be the change-of-basis matrix that expresses the basis $\{x_1,\ldots,x_n\}$ in terms of the basis $\{y_1,\ldots,y_n\}$ via $x_k=\sum_{\ell=1}^{n}c_{k\ell}y_\ell$. By the same calculation as in the proof of the Lemma, \begin{align} \left( \sum_{I}g_I dy^I \right) \wedge \left( \sum_{J} h_{J} dy^J\right) (x_{k_1},\ldots, x_{k_q}, \ldots, x_{k_{r+s}}) =& \sum_{L}b_L\, dy^L(x_{k_1},\ldots, x_{k_q}, \ldots, x_{k_{r+s}}) \\ =& \sum_{L} b_L \det(c_{k_p\ell_q})_{(r+s)\times (r+s)}. \end{align} By the Lemma, $a_K=\sum_{L} b_L \det(c_{k_p\ell_q})_{(r+s)\times (r+s)}$, and it follows that $$ \left( \sum_{I}a_I dx^I\right) \wedge \left( \sum_{J}b_J dx^J\right) \hspace{1cm} \mbox{ and } \hspace{1cm} \left(\sum_{I}g_I dy^I \right) \wedge \left( \sum_{J} h_{J} dy^J\right) $$ agree on every $(r+s)$-tuple of vectors $x_{k_1},\ldots, x_{k_q}, \ldots, x_{k_{r+s}}$ taken from the basis $\{x_1,\ldots,x_n\}$. By multilinearity, the equality then holds for every $(r+s)$-tuple of vectors $w_{1},\ldots, w_{r+s}$ in $\mathbb{R}^n$.
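Putting the pieces together, the claim can also be checked numerically. The sketch below (helper functions repeated from the earlier snippets so it runs on its own; all names are illustrative, 0-based indices) computes $\alpha\wedge\beta$ once in canonical coordinates via $(\ast)$ and once in a random basis, and compares the two results as multilinear maps:

```python
import numpy as np
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation that sorts seq (entries assumed distinct)."""
    s, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(alpha, beta):
    """Definition (*) on coefficient dictionaries keyed by index tuples."""
    out = {}
    for I, a in alpha.items():
        for J, b in beta.items():
            if not set(I) & set(J):
                K = tuple(sorted(I + J))
                out[K] = out.get(K, 0.0) + perm_sign(I + J) * a * b
    return out

def coeffs_in_basis(a, Y, r):
    """Coefficients of sum_J a_J dx^J in the basis dual to the rows of Y."""
    n = Y.shape[0]
    return {I: sum(aJ * np.linalg.det(Y[np.ix_(I, J)]) for J, aJ in a.items())
            for I in combinations(range(n), r)}

def eval_form(coeffs, dual_rows, vectors):
    """Evaluate sum_K coeffs[K] dw^K, with dw^k the row dual_rows[k]."""
    pair = vectors @ dual_rows.T
    return sum(a * np.linalg.det(pair[:, list(K)]) for K, a in coeffs.items())

rng = np.random.default_rng(2)
n, r, s = 4, 2, 1
alpha = {I: rng.normal() for I in combinations(range(n), r)}
beta = {J: rng.normal() for J in combinations(range(n), s)}
Y = rng.normal(size=(n, n))                    # rows: the new basis y_l

lhs = wedge(alpha, beta)                       # (*) in canonical coordinates
g = coeffs_in_basis(alpha, Y, r)               # alpha = sum_I g_I dy^I
h = coeffs_in_basis(beta, Y, s)                # beta  = sum_J h_J dy^J
rhs = wedge(g, h)                              # the same rule in the y basis

dy = np.linalg.inv(Y).T                        # rows: the dual basis dy^l
W = rng.normal(size=(r + s, n))                # random (r+s)-tuple of vectors
print(np.isclose(eval_form(lhs, np.eye(n), W), eval_form(rhs, dy, W)))  # True
```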

Second answer:

Here's a very simple argument. An $r$-vector can be thought of (or perhaps was even defined to you) as a formal sum $$\sum_i a_i(v_1 \wedge \ldots \wedge v_r),$$ subject only to the constraints that make the $r$-fold wedge product a multilinear alternating map. These are:

  • The function is linear in every argument.
  • Swapping arguments negates the value.

So now you want to prove that the operation $$\left(\sum_i a_i(u_1 \wedge \ldots \wedge u_r)\right)\wedge \left(\sum_j b_j(v_1 \wedge \ldots \wedge v_s)\right)=\sum_{i,j}a_ib_j(u_1\wedge\ldots \wedge u_r\wedge v_1\wedge\ldots\wedge v_s),$$ is well-defined, meaning that the same arguments always give the same output. Consider a different representation of the arguments: there must exist some chain of equalities, using only the two laws above, connecting the original and new representations.

But then these equalities also apply to the right hand side! For instance, if we substitute $$a_i(u_1\wedge u_2\wedge\ldots\wedge u_r)\quad\leftrightarrow\quad-a_i(u_2\wedge u_1\wedge\ldots\wedge u_r)$$ on the LHS, we can also substitute $$a_ib_j(u_1\wedge u_2\wedge\ldots\wedge u_r \wedge v_1 \wedge\ldots\wedge v_s)\quad\leftrightarrow\quad-a_ib_j(u_2\wedge u_1\wedge\ldots\wedge u_r \wedge v_1 \wedge\ldots\wedge v_s)$$ on the RHS. Exactly the same applies for linearity (though it's more tedious to write down).
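Here is a small numerical illustration of this substitution argument (my own sketch; all names are hypothetical): representing an $r$-vector by the determinants of the $r\times r$ minors of its factors, a swap-and-negate rewrite of the left-hand side leaves the product on the right-hand side unchanged.

```python
import numpy as np
from itertools import combinations

def rvec_components(terms, n):
    """Components of a formal sum sum_i a_i (v_1 ^ ... ^ v_r): the entry at
    an increasing tuple K is sum_i a_i * det(columns K of the factor rows)."""
    r = len(terms[0][1])
    comps = {K: 0.0 for K in combinations(range(n), r)}
    for a, vs in terms:
        V = np.array(vs, float)              # r x n, rows are the vectors
        for K in comps:
            comps[K] += a * np.linalg.det(V[:, list(K)])
    return comps

def concat_product(xs, ys):
    """The candidate product: wedge term-by-term by concatenating factors."""
    return [(a * b, u + v) for a, u in xs for b, v in ys]

rng = np.random.default_rng(3)
n = 4
u1, u2, v1 = (list(rng.normal(size=n)) for _ in range(3))

x = [(2.0, [u1, u2])]                        # 2 (u1 ^ u2)
x_swapped = [(-2.0, [u2, u1])]               # -2 (u2 ^ u1): same 2-vector
y = [(1.0, [v1])]

# Swapping arguments on the left induces the same swap on the right, so the
# two representations of x yield the same product:
p1 = rvec_components(concat_product(x, y), n)
p2 = rvec_components(concat_product(x_swapped, y), n)
print(all(np.isclose(p1[K], p2[K]) for K in p1))  # True
```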

After following this chain of equalities, we'll find that indeed, different representations of the same values give the same result. In particular, the wedge product works the same no matter the basis we choose.