Finding the antiderivatives for the differential form $\alpha = x_2 d x_1 \wedge d x_2$


Consider the differential form

$$\alpha = x_2 d x_1 \wedge d x_2$$

on $\mathbb{R}^3$. I first want to find a $1$-form $\beta \in \Omega^1(\mathbb{R}^3)$ that is an antiderivative for $\alpha$, i.e. we have $\alpha = d \beta$ (where $d$ denotes the exterior derivative). Next, I want to find "more" antiderivatives for $\alpha$; if possible, I want to construct a general formula for them to get all of them.

I must admit that I don't really know how to get started with this task. I think that $\beta$ must be of the form $\beta = a \, d x_i$, where $a: \mathbb{R}^3 \to \mathbb{R}$ is a coefficient function and $i \in \{1, 2\}$, so that $d \beta = d a \wedge d x_i = x_2 d x_1 \wedge d x_2 = \alpha$. But how can I find out what $a$ must actually look like? And once I have found $\beta$, what freedom do I have to modify it in order to get different antiderivatives? Can I perhaps add an exact form to $\beta$, so that the exact form "falls away" under the exterior derivative?

Best Answer

If $\beta$ and $\beta'$ are one-forms such that $d\beta = d\beta' = \alpha$, then $\beta' - \beta$ is closed, i.e. $d(\beta' - \beta) = 0$. It then follows from the Poincaré Lemma that $\beta' - \beta$ is exact, i.e. $\beta' - \beta = df$ for some function $f$. So we see that if $\beta$ and $\beta'$ satisfy $d\beta = d\beta' = \alpha$, then $\beta' = \beta + df$ for some function $f$. Conversely, if $\beta$ satisfies $d\beta = \alpha$, then for any function $f$, we see that

$$d(\beta + df) = d\beta + d(df) = \alpha + 0 = \alpha.$$

So given a one-form $\beta$ with $d\beta = \alpha$, the set of one-forms which have exterior derivative $\alpha$ is $\{\beta + df \mid f \in C^{\infty}(\mathbb{R}^3)\}$.
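This $\beta + df$ freedom can also be checked symbolically: for an arbitrary smooth $f$, every coefficient of $d(df)$ is a difference of mixed partials, which vanishes by symmetry of second derivatives. A minimal SymPy sketch (the coefficient formulas are the standard ones for the exterior derivative of a 1-form on $\mathbb{R}^3$, written out below; variable names are mine):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Function('f')(x1, x2, x3)

# df = f_{x1} dx1 + f_{x2} dx2 + f_{x3} dx3
a1, a2, a3 = sp.diff(f, x1), sp.diff(f, x2), sp.diff(f, x3)

# coefficients of d(df) in the basis dx1^dx2, dx1^dx3, dx2^dx3;
# each is a difference of mixed partials, hence zero
c12 = sp.simplify(sp.diff(a2, x1) - sp.diff(a1, x2))
c13 = sp.simplify(sp.diff(a3, x1) - sp.diff(a1, x3))
c23 = sp.simplify(sp.diff(a3, x2) - sp.diff(a2, x3))

print(c12, c13, c23)  # -> 0 0 0
```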

In order to find a single one-form $\beta$ with $d\beta = \alpha$ we can use a direct approach. If $\beta$ is a one-form on $\mathbb{R}^3$, then $\beta = a_1\, dx^1 + a_2\, dx^2 + a_3\, dx^3$ for some functions $a_1, a_2, a_3 : \mathbb{R}^3 \to \mathbb{R}$. Then

$$d\beta = \left(\frac{\partial a_2}{\partial x^1} - \frac{\partial a_1}{\partial x^2}\right)dx^1\wedge dx^2 + \left(\frac{\partial a_3}{\partial x^1} - \frac{\partial a_1}{\partial x^3}\right)dx^1\wedge dx^3 + \left(\frac{\partial a_3}{\partial x^2} - \frac{\partial a_2}{\partial x^3}\right)dx^2\wedge dx^3.$$

So if $d\beta = \alpha$, we see that

\begin{align*} \frac{\partial a_2}{\partial x^1} - \frac{\partial a_1}{\partial x^2} &= x_2\\ \frac{\partial a_3}{\partial x^1} - \frac{\partial a_1}{\partial x^3} &= 0\\ \frac{\partial a_3}{\partial x^2} - \frac{\partial a_2}{\partial x^3} &= 0. \end{align*}

We just need to find three functions $a_1, a_2, a_3$ which satisfy these equations. There is a lot of freedom here. Let's set $a_1 = 0$, so that the first equation reduces to

$$\frac{\partial a_2}{\partial x^1} = x_2.$$

The simplest function $a_2$ I can think of satisfying this equation is $a_2 = x_1x_2$. So $a_1 = 0$, $a_2 = x_1x_2$ solves the first equation. With these choices of $a_1$ and $a_2$, the second and third equations reduce to

$$\frac{\partial a_3}{\partial x^1} = \frac{\partial a_3}{\partial x^2} = 0.$$

The simplest function $a_3$ satisfying these equations is $a_3 = 0$. So if $a_1 = 0$, $a_2 = x_1x_2$, and $a_3 = 0$, then all three equations are satisfied. Therefore, the one-form $\beta = x_1x_2\, dx^2$ satisfies $d\beta = \alpha$.
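This choice can be verified mechanically by plugging $a_1 = 0$, $a_2 = x_1x_2$, $a_3 = 0$ into the three coefficient equations above. A small SymPy sketch (names are mine):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# candidate antiderivative beta = a1 dx1 + a2 dx2 + a3 dx3
a1, a2, a3 = sp.Integer(0), x1*x2, sp.Integer(0)

# coefficients of d(beta) in the basis dx1^dx2, dx1^dx3, dx2^dx3
c12 = sp.diff(a2, x1) - sp.diff(a1, x2)
c13 = sp.diff(a3, x1) - sp.diff(a1, x3)
c23 = sp.diff(a3, x2) - sp.diff(a2, x3)

print(c12, c13, c23)  # -> x2 0 0, i.e. d(beta) = x2 dx1^dx2 = alpha
```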

Second Answer

For a closed $k$-form defined in a star-shaped region relative to the origin, there is an algorithm for finding an antiderivative that I describe here. Applying this to $\alpha$:

Step 1: substitute $x_k\to tx_k$ and $dx^k\to t\,dx^k+x_k\,dt$.$$\begin{align}x_2\,dx^1\land dx^2 &\to tx_2(t\,dx^1+x_1\,dt)\land(t\,dx^2+x_2\,dt)\\ &= tx_2(t^2\,dx^1\land dx^2+tx_2\,dx^1\land dt+tx_1\,dt\land dx^2)\end{align}$$

Step 2: Drop all terms that don’t involve $dt$ and move $dt$ to the left $$t^2x_2(x_1\,dt\land dx^2-x_2\,dt\land dx^1)$$

Step 3: Treat as an ordinary integral w/r $t$ and integrate from $0$ to $1$ $$\int_0^1t^2(x_1x_2\,dx^2-x_2^2\,dx^1)\,dt = \frac13(x_1x_2\,dx^2-x_2^2\,dx^1).$$

This is not the same as the antiderivative found by Michael Albanese in his answer, but the difference is $\frac13(x_2^2\,dx^1+2x_1x_2\,dx^2)=d\left(\frac13 x_1x_2^2\right)$, which is a “constant” of integration that vanishes when differentiated.
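Both facts can be checked symbolically: the homotopy-algorithm output $\frac13(x_1x_2\,dx^2 - x_2^2\,dx^1)$ also has exterior derivative $\alpha$, and its difference from $x_1x_2\,dx^2$ is exactly $d\left(\frac13 x_1x_2^2\right)$. A SymPy sketch (the helper `d_coeffs` is mine, not a library function):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def d_coeffs(a1, a2, a3):
    """Coefficients of d(a1 dx1 + a2 dx2 + a3 dx3) in dx1^dx2, dx1^dx3, dx2^dx3."""
    return (sp.diff(a2, x1) - sp.diff(a1, x2),
            sp.diff(a3, x1) - sp.diff(a1, x3),
            sp.diff(a3, x2) - sp.diff(a2, x3))

# antiderivative from the homotopy algorithm: (1/3)(-x2^2 dx1 + x1 x2 dx2)
b1, b2, b3 = -x2**2/3, x1*x2/3, sp.Integer(0)
print(d_coeffs(b1, b2, b3))  # -> (x2, 0, 0), i.e. d(beta') = alpha

# difference from x1 x2 dx2 should equal d(x1 x2^2 / 3)
f = x1*x2**2/3
diff_form = (sp.Integer(0) - b1, x1*x2 - b2, sp.Integer(0) - b3)
grad_f = (sp.diff(f, x1), sp.diff(f, x2), sp.diff(f, x3))
print([sp.simplify(u - v) for u, v in zip(diff_form, grad_f)])  # -> [0, 0, 0]
```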