Why does the curl of a function provide this particular amount of information?


In a classical electrodynamics textbook (Griffiths), it is mentioned that even though the electric field, $E:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$, is a (3D) vector-valued function, the amount of information needed to fully describe it is equivalent to the amount needed to describe a scalar-valued function, because it is the gradient of an electric potential function, $U:\mathbb{R}^{3}\rightarrow \mathbb{R}$, and so is completely determined by it. The author goes on to explain that this is so because $E$ has the property that its curl is zero everywhere, and this is what restricts the freedom in determining $E$ (and what enables it to be the gradient of a scalar function in the first place).

This is all fine (and fun), but I find myself unable to answer this question: why does the imposition of zero curl provide an amount of information exactly equivalent to two scalar-valued functions (no more, no less)?

An example of the sort of reasoning that leads me to other conclusions: since zero curl is equivalent to equating three pairs of partial derivatives ($\partial E_{x}/\partial y = \partial E_{y}/\partial x$ and so on), leaving 6 of the 9 partial derivatives "free", it seems that "one third of the amount of information" required to describe $E$ has been taken up, as opposed to "two thirds"...
I am of course assuming here that the first order partial derivatives determine the behavior of the function (perhaps up to a constant as in the 1D case?), and that these partial derivatives are independent of each other and thus provide equal amounts of information. Is either of these assumptions wrong? Is my whole reasoning off?

Any insight into this question and a possible answer would be appreciated.

*The question is, of course, a general one about vector-valued functions, with $E$ just being a particular case.

**I have not mentioned all the obvious smoothness assumptions necessary for everything to be defined.

***I realise that I'm playing fast and loose with the word "information", and that my whole question is very informal. Any references to an area of mathematics which may put such intuitions on a rigorous footing are more than welcome.

Edit: I am not asking why (or when) a vector field having zero curl is equivalent to it being a gradient of some scalar field. I am asking about the amount of information we get about a vector field when we determine its curl.

There are 3 answers below.

Accepted answer:

If the curl were an arbitrary vector field, it would “contain as much information as the field itself”. (I'm using scare quotes because this is all at the same hand-waving level of rigour as the arguments you describe.)

However, the curl satisfies $\nabla\cdot(\nabla\times E)=0$. Thus, we have one scalar constraint on the curl, which reduces the number of free scalar functions in the curl from $3$ to $2$, so specifying the curl reduces the number of free scalar functions in $E$ from $3$ by $2$ to $1$.
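This counting can be checked symbolically. The following sketch (using sympy; the component names are illustrative) verifies that $\nabla\cdot(\nabla\times E)=0$ holds identically for arbitrary smooth components, which is exactly the one scalar constraint on the curl described above:

```python
# Symbolic check that div(curl E) = 0 for any smooth vector field E,
# the single scalar constraint reducing the curl's free components from 3 to 2.
import sympy as sp

x, y, z = sp.symbols('x y z')
# Three arbitrary smooth component functions of E
Ex, Ey, Ez = (sp.Function(n)(x, y, z) for n in ('Ex', 'Ey', 'Ez'))

curl = sp.Matrix([
    sp.diff(Ez, y) - sp.diff(Ey, z),
    sp.diff(Ex, z) - sp.diff(Ez, x),
    sp.diff(Ey, x) - sp.diff(Ex, y),
])
div_of_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
print(sp.simplify(div_of_curl))  # 0
```

The cancellation relies only on the equality of mixed partial derivatives, the same identity invoked in the answer below.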

Your considerations about the partial derivatives don't work out because the partial derivatives aren't independent. If they were, that would replace one scalar function by three scalar functions. The partial derivatives are subject to $\frac{\partial^2}{\partial x\partial y}=\frac{\partial^2}{\partial y\partial x}$, and likewise for the other two pairs of coordinates.

Answer:

The correct context for these kinds of questions is de Rham cohomology. This is the natural cohomology theory associated with differential forms, and these, in turn, are the natural generalisations of the integrands of 3D vector analysis to manifolds of arbitrary curvature and dimension. Unfortunately, this is not yet a part of the standard toolkit of every physicist in the way that vector analysis is.

In this context, what you are asking is: given a differential form $\alpha$, when is there a differential form $\beta$ such that $d\beta=\alpha$? Such a differential form $\alpha$ is called exact.

Suppose this is true. Since it is always true that $d^2=0$ (the correct generalisation of vector-analysis identities such as $\operatorname{curl}(\operatorname{grad} f) = 0$), we must have $d\alpha=d^2\beta=0$, and hence it must hold that $d\alpha=0$. Differential forms satisfying this condition are called closed. So our first condition is that for a differential form to be exact, it is necessary that it be closed. However, this condition is not sufficient: further conditions depend on the global topology of the space in question. If the space is topologically like Euclidean space (more precisely, contractible), then every closed form is exact; this is the Poincaré lemma. Since a manifold is, by definition, locally Euclidean, a closed form is always locally exact.
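As a concrete illustration of closed-but-not-exact, here is a sympy sketch of the standard textbook example on the punctured plane (assumed here as the usual setup): the field has zero curl everywhere on its domain, yet its line integral around the origin is $2\pi$, so it cannot be a gradient.

```python
# The field F = (-y, x)/(x^2 + y^2) on the punctured plane is closed
# (zero curl) but not exact: its loop integral around the origin is 2*pi.
import sympy as sp

x, y, t = sp.symbols('x y t')
Fx = -y / (x**2 + y**2)
Fy = x / (x**2 + y**2)

# 2D scalar curl: dFy/dx - dFx/dy vanishes away from the origin
curl2d = sp.simplify(sp.diff(Fy, x) - sp.diff(Fx, y))
print(curl2d)  # 0

# Line integral around the unit circle (x, y) = (cos t, sin t)
cx, cy = sp.cos(t), sp.sin(t)
integrand = (Fx.subs({x: cx, y: cy}) * sp.diff(cx, t)
             + Fy.subs({x: cx, y: cy}) * sp.diff(cy, t))
loop_integral = sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi))
print(loop_integral)  # 2*pi
```

A nonzero loop integral is impossible for a gradient field, which is exactly the obstruction the global topology (the puncture) introduces.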

Answer:

What you are asking about is known as the Helmholtz decomposition.

Probably the easiest place to see the decomposition is in Fourier space, where the condition of zero curl means that $\vec k\times \vec E[\vec k]=0$, which directly implies that $\vec E[\vec k] = f(\vec k)\,\vec k$, i.e. that each Fourier mode of $\vec E$ points in the $\hat k$ direction. All of the standard requirements for a Fourier transform to exist then also need to apply to the electric field.
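This Fourier-space picture can be sketched numerically (a minimal numpy illustration, assuming a periodic grid and a spectrally computed gradient): for $E = \nabla\phi$, each mode $\hat E(\vec k) = i\vec k\,\hat\phi(\vec k)$ is parallel to $\vec k$, so $\vec k\times\hat E$ vanishes mode by mode.

```python
# For E = grad(phi) on a periodic grid, k x E_hat(k) = 0 for every mode.
import numpy as np

rng = np.random.default_rng(0)
n = 16
# Fourier coefficients of a random real periodic potential phi
phi_hat = np.fft.fftn(rng.standard_normal((n, n, n)))

# Integer wavenumbers on the FFT grid
k = np.fft.fftfreq(n, d=1.0 / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')

# E = grad(phi) computed spectrally: E_hat_j = i * k_j * phi_hat
Ex_hat = 1j * kx * phi_hat
Ey_hat = 1j * ky * phi_hat
Ez_hat = 1j * kz * phi_hat

# k x E_hat should vanish for every mode (zero curl in Fourier space)
cross = np.stack([ky * Ez_hat - kz * Ey_hat,
                  kz * Ex_hat - kx * Ez_hat,
                  kx * Ey_hat - ky * Ex_hat])
print(np.max(np.abs(cross)))  # ~0, up to floating-point rounding
```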

I suspect that the real tension comes from a question more like this: given a (co)vector field $F_\alpha$, it is not in principle hard to form the Jacobian $\nabla_\alpha F_\beta$, and this can clearly be decomposed into a symmetric and an antisymmetric part. It is clear that $F_\alpha = \nabla_\alpha \phi$ is sufficient for the Jacobian to be symmetric, but how can we show it is necessary (with the various "assuming the field is nice" caveats that we always have in physics)? In other words, how can we prove that a nice symmetric Jacobian is a Hessian?

We might just appeal to Fourier space again, but phrased like that, I think there is another clarifying insight: a real symmetric matrix is a special case of a Hermitian matrix, and those are diagonalisable; a diagonal matrix could then be integrated directly into a solution as the sum of the double integrals of its diagonal components.
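One constructive answer to the "symmetric Jacobian is a Hessian" question is the homotopy formula behind the Poincaré lemma, $\phi(\vec x)=\int_0^1 \vec F(t\vec x)\cdot\vec x\,dt$, valid on a star-shaped domain. A sympy sketch with an illustrative curl-free field (the specific field chosen here is only an example):

```python
# Reconstructing the potential of a curl-free field via the
# Poincaré-lemma homotopy formula: phi(x) = integral_0^1 F(t x) . x dt
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# An illustrative curl-free field: F = grad(x^2*y + x*z^3)
F = sp.Matrix([2*x*y + z**3, x**2, 3*x*z**2])

# Its Jacobian is symmetric (equivalently, curl F = 0)
J = F.jacobian([x, y, z])
assert J == J.T

# Homotopy formula: pull F back along the ray t -> t*x and integrate
Ft = F.subs({x: t * x, y: t * y, z: t * z})
phi = sp.integrate(Ft.dot(sp.Matrix([x, y, z])), (t, 0, 1))
print(sp.expand(phi))  # x**2*y + x*z**3
```

The same formula works for any nice field with a symmetric Jacobian, which is one way to make "symmetric implies Hessian" concrete.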