Jacobian and changing coordinates proof


Consider a transformation from a coordinate system $x^{a}$ to another system with coordinates $x^{'a}$, i.e. a transformation of the type $x^{a} \rightarrow x^{'a}$.

With that I can construct the Jacobian matrix:

$\bigg[ \frac{\partial x^{'a}}{\partial x^{b}} \bigg]$

The determinant of this matrix is called the Jacobian determinant, hereafter J. I want to prove that:

$dx^{'1}dx^{'2}...dx^{'N} = Jdx^{1}dx^{2}...dx^{N}$

where the primed system is Cartesian and the unprimed one is a general set of coordinates. How can I do that?
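Before attempting the proof, the identity can at least be checked numerically for a concrete map. The sketch below assumes polar coordinates, $x^{'1} = r\cos\theta$, $x^{'2} = r\sin\theta$, for which the Jacobian determinant is $J = r$; the base point and step sizes are illustrative choices, not from the question. It maps a small coordinate rectangle into the Cartesian plane and compares the area of its image with $J\,dx^{1}dx^{2}$:

```python
import math

# Sanity check of dx'^1 dx'^2 = J dx^1 dx^2 for polar coordinates
# (an assumed concrete example): x' = r cos(theta), y' = r sin(theta),
# whose Jacobian determinant is J = r.

def polar_to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

def shoelace_area(pts):
    # Unsigned area of a polygon via the shoelace formula.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

r, theta = 2.0, 0.7          # base point in the unprimed system
dr, dtheta = 1e-4, 1e-4      # small coordinate rectangle

# Images of the rectangle's four corners in the primed (Cartesian) system.
corners = [polar_to_cartesian(r + a, theta + b)
           for a, b in [(0, 0), (dr, 0), (dr, dtheta), (0, dtheta)]]

image_area = shoelace_area(corners)
predicted = r * dr * dtheta  # J dr dtheta with J = r

print(image_area, predicted)  # agree to leading order in dr, dtheta
```

The two areas agree up to corrections of higher order in the step sizes, which is exactly what the identity predicts for infinitesimal volume elements.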

I remembered that under a coordinate transformation it is valid to write (using the summation convention):

$dx^{'a} = \frac{\partial x^{'a}}{\partial x^{b}}dx^{b}$
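This differential relation can itself be verified by finite differences. The sketch below again assumes the polar-to-Cartesian map as a concrete example (the point and displacements are arbitrary choices): the actual change of the primed coordinates is compared with the Jacobian matrix applied to $(dx^{1}, dx^{2})$.

```python
import math

# Finite-difference check of dx'^a = (dx'^a/dx^b) dx^b for the assumed
# map x' = r cos(theta), y' = r sin(theta).

r, theta = 1.5, 0.4
dr, dtheta = 1e-6, 2e-6   # small displacements dx^b

# Analytic Jacobian matrix [dx'^a/dx^b] at (r, theta).
J = [[math.cos(theta), -r * math.sin(theta)],
     [math.sin(theta),  r * math.cos(theta)]]

# Left-hand side: actual change of the primed coordinates.
dx1 = (r + dr) * math.cos(theta + dtheta) - r * math.cos(theta)
dx2 = (r + dr) * math.sin(theta + dtheta) - r * math.sin(theta)

# Right-hand side: Jacobian applied to (dr, dtheta).
lin1 = J[0][0] * dr + J[0][1] * dtheta
lin2 = J[1][0] * dr + J[1][1] * dtheta

print(dx1 - lin1, dx2 - lin2)  # both differences are second-order small
```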

I tried to apply that to the proof I need, but I was not able to get anywhere because too many terms show up. In the two-dimensional case I ended up with something like this:

$dx^{'1}dx^{'2} = \bigg( \frac{\partial x^{'1}}{\partial x^{1}} \frac{\partial x^{'2}}{\partial x^{1}} \bigg)(dx^{1})^{2}+ \bigg( \frac{\partial x^{'1}}{\partial x^{2}} \frac{\partial x^{'2}}{\partial x^{2}} \bigg)(dx^{2})^{2} + \bigg[\bigg( \frac{\partial x^{'1}}{\partial x^{1}} \frac{\partial x^{'2}}{\partial x^{2}} \bigg) + \bigg( \frac{\partial x^{'1}}{\partial x^{2}} \frac{\partial x^{'2}}{\partial x^{1}} \bigg)\bigg]dx^{1}dx^{2} $

As you can see, the two terms inside the brackets are almost the Jacobian determinant, except that they are added instead of subtracted. So my question is: what is wrong with this approach?

I did find this article with a different idea for the proof. But why should I use that instead of mine? Also, how could I generalize the idea of that article to N dimensions?

Accepted answer:

Following the tips given by @Neal and @Ted Shifrin, I looked into the subject further, and I believe I found a better (and more convincing) proof of the problem, here on Math Stack Exchange.

The link for the (probable) answer is here!

Second answer:

This is a hint, to tweak or spark your imagination.

Take $F(x, y)$ with $x = x(u, v)$ and $y = y(u, v)$.

For a single-variable change, 'the' step is right because it is a one-way relationship: think graphically about how a function of $x$ changes if you change a parameter $t$, with $x$ depending on $t$.

However, with two parameters the relationship is more intricate. When $u$ changes, both $x$ and $y$ change, and so the function changes in two respects: through $x$ and through $y$. Think graphically about this. The first-order derivatives come with the idea of local linear approximation: under very close inspection, a curve behaves like its tangent. So ask how the tangents of $x$ and $y$ change with a change in $u$, and how that influences the change in $F$. It may also help to think in terms of matrices or linear transformations, since they make graphical thinking easier, specifically when considering how lines change and how they are mapped.
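The linear-approximation picture can be made concrete: a linear map $A$ sends the unit square to a parallelogram whose area is $|\det A|$, which is the two-dimensional germ of the Jacobian identity. A minimal sketch, where the matrix is an arbitrary assumed example:

```python
# A linear map A sends the unit square to a parallelogram of area |det A|.
# The matrix below is an arbitrary illustrative choice.

def apply(A, p):
    return (A[0][0] * p[0] + A[0][1] * p[1],
            A[1][0] * p[0] + A[1][1] * p[1])

def shoelace_area(pts):
    # Unsigned polygon area via the shoelace formula.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1]
            - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(s) / 2.0

A = [[2.0, 1.0], [0.5, 3.0]]                  # assumed 2x2 "Jacobian"
square = [(0, 0), (1, 0), (1, 1), (0, 1)]     # unit coordinate square
image = [apply(A, p) for p in square]         # its parallelogram image

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # = 5.5
print(shoelace_area(image), abs(det))         # both print 5.5
```

Replacing the constant matrix by the point-dependent Jacobian matrix recovers the general statement: locally, the transformation scales volumes by $|J|$.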

This idea can be taken further to n dimensions.