Explaining an argument of Ahlfors without the use of differentials


I struggle to understand arguments that make use of $\text{d}x$-like notation (with the exception of rigorously defined differential forms, yet everything below is elementary enough so that forms are not needed). Ahlfors gives such an argument in his book Complex Analysis to prove the Result below, and I was wondering if someone could explain Ahlfors' argument without the use of differentials.


Theorem 1: The line integral $\int p \text{d}x + q \text{d}y$, defined in $\Omega$, depends only on the end points of $\gamma$ if and only if there exists a function $U(x,y)$ in $\Omega$ with the partial derivatives $\partial U/\partial x = p$, $\partial U/\partial y = q$.

After proving this result, Ahlfors writes

It is customary to write $\text{d}U = (\partial U/\partial x)\text{d}x + (\partial U/\partial y)\text{d}y$ and to say that an expression $p \text{d}x + q \text{d}y$ which can be written in this form is an exact differential. Thus an integral depends only on the end points if and only if the integrand is an exact differential. Observe that $p$, $q$ and $U$ can be either real or complex. The function $U$, if it exists, is uniquely determined up to an additive constant, for if two functions have the same partial derivatives their difference must be constant. When is $f(z) \text{d}z = f(z) \text{d}x + if(z) \text{d}y$ an exact differential? According to the definition there must exist a function $F(z)$ in $\Omega$ with the partial derivatives $$\frac{\partial F(z)}{\partial x} = f(z), \ \ \ \ \frac{\partial F(z)}{\partial y} = if(z).$$ If this is so, $F(z)$ fulfills the Cauchy-Riemann equation $$\frac{\partial F}{\partial x} = -i\frac{\partial F}{\partial y};$$ since $f(z)$ is by assumption continuous (otherwise $\int_\gamma f \text{d}z$ would not be defined) $F(z)$ is analytic with the derivative $f(z)$ (Chap. 2, Sec. 1.2).
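For my own benefit, here is how I unpack the step from the two partial derivatives to the Cauchy-Riemann equation (this is my reading, not Ahlfors' wording): since $1/i = -i$,

$$\frac{\partial F}{\partial x} = f(z) = \frac{1}{i}\,\big(if(z)\big) = -i\,\frac{\partial F}{\partial y}.$$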

Result: The integral $\int_\gamma f \text{d}z$, with continuous $f$, depends only on the end points of $\gamma$ if and only if $f$ is the derivative of an analytic function in $\Omega$.


1st Question: is Theorem 1 an instance of the following?

Gradient Theorem: $F$ is a path-independent vector field if and only if $F$ is the gradient of some scalar field $f$, in which case $$\int_\gamma F(\textbf{r})\cdot\text{d}\textbf{r} = f(\textbf{q}) - f(\textbf{p})$$ where $\gamma$ begins at $\textbf{p}$ and ends at $\textbf{q}$.
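To make the comparison concrete, the identification I have in mind (which may be exactly what Ahlfors intends) is $\mathbf{F} = (p, q)$: parametrizing $\gamma$ by $\mathbf{r}(t) = (x(t), y(t))$, $t \in [a,b]$, gives

$$\int_\gamma p\,\text{d}x + q\,\text{d}y = \int_a^b \big(p(x(t),y(t))\,x'(t) + q(x(t),y(t))\,y'(t)\big)\,\text{d}t = \int_\gamma \mathbf{F}\cdot\text{d}\mathbf{r},$$

and the condition $\partial U/\partial x = p$, $\partial U/\partial y = q$ is precisely $\nabla U = \mathbf{F}$. (This works verbatim when $p$, $q$ are real; for complex $p$, $q$ I suppose one applies the real statement to the real and imaginary parts separately.)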

2nd Question: can the Result be proven without the use of $\text{d}x$ notation? If so, seeing such an argument would really help me understand these paragraphs from the book.