Scalar integrals in higher dimensions


The thing I want to do

The typical vector calculus course defines:

  1. A bunch of integrals of vector fields in $\mathbb R^2$ and $\mathbb R^3$: line integrals of a vector field along a curve, flux integrals of a vector field across a curve in $\mathbb R^2$, and flux integrals of a vector field across a surface in $\mathbb R^3$.
  2. A bunch of integrals of scalar functions in $\mathbb R^2$ and $\mathbb R^3$: here, we can just integrate any scalar function over any curve or surface.

I know that the integrals of vector fields can be generalized to integrals of differential forms. For example, the flux of a vector field $\mathbf F = M\,\mathbf i + N\,\mathbf j + P\,\mathbf k$ across a surface is equivalent to integrating $M\,\mathrm dy \wedge \mathrm dz + N\,\mathrm dz \wedge \mathrm dx + P \, \mathrm dx \wedge \mathrm dy$ over the surface. If the surface is given a parameterization, writing $x(u,v)$, $y(u,v)$, and $z(u,v)$ as functions of $u$ and $v$, then we can expand $\mathrm dx$ as $\frac{\partial x}{\partial u}\,\mathrm du + \frac{\partial x}{\partial v}\,\mathrm dv$, do the same for $\mathrm dy$ and $\mathrm dz$, and simplify the wedge products to get something we can integrate with respect to $u$ and $v$.
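As a numerical sanity check of that expansion (a sketch only; the `partial` finite-difference helper and the choice of the unit sphere are my own illustration, not from the original), the pullback of $M\,\mathrm dy\wedge\mathrm dz + N\,\mathrm dz\wedge\mathrm dx + P\,\mathrm dx\wedge\mathrm dy$ reduces to three $2\times 2$ Jacobians:

```python
import math

def partial(f, u, v, wrt, h=1e-6):
    """Central finite-difference partial derivative (hypothetical helper)."""
    if wrt == 'u':
        return (f(u + h, v) - f(u - h, v)) / (2 * h)
    return (f(u, v + h) - f(u, v - h)) / (2 * h)

# Unit sphere parameterization: u = polar angle, v = azimuth.
x = lambda u, v: math.sin(u) * math.cos(v)
y = lambda u, v: math.sin(u) * math.sin(v)
z = lambda u, v: math.cos(u)

def jac2(f, g, u, v):
    """2x2 Jacobian determinant d(f, g)/d(u, v)."""
    return (partial(f, u, v, 'u') * partial(g, u, v, 'v')
            - partial(f, u, v, 'v') * partial(g, u, v, 'u'))

# Pullback of M dy^dz + N dz^dx + P dx^dy for F = (x, y, z):
u0, v0 = 0.7, 1.2
integrand = (x(u0, v0) * jac2(y, z, u0, v0)
             + y(u0, v0) * jac2(z, x, u0, v0)
             + z(u0, v0) * jac2(x, y, u0, v0))
# For this F on the unit sphere the integrand should be sin(u0).
```

This matches the known closed form $\mathbf F\cdot(\mathbf r_u\times\mathbf r_v) = \sin u$ for the radial field on the unit sphere.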

When I try to understand how to generalize scalar integrals, I run into trouble, because then I have to understand what a "metric tensor" or "Riemannian volume form" is, and I don't really understand those. However, I have come up with an approach that I do understand, and which seems to correctly handle all the special cases I am confident in.

The approach I'd like to verify

All the cases I understand seem to be based on the Jacobian determinant, which I'll write $\frac{\partial (x_1, x_2, \dots, x_n)}{\partial (u_1, u_2, \dots, u_n)}$: the determinant of the matrix whose $(i,j)$ entry is $\frac{\partial x_i}{\partial u_j}$.

  • When integrating by substitution in higher dimensions, we multiply by the absolute value of this determinant.
  • When integrating over a surface in $\mathbb R^3$, we multiply by the norm of a cross product of partial derivatives, but it simplifies to the expression $$\sqrt{\left(\frac{\partial(x,y)}{\partial(u,v)}\right)^2 + \left(\frac{\partial(x,z)}{\partial(u,v)}\right)^2 + \left(\frac{\partial(y,z)}{\partial(u,v)}\right)^2}.$$
  • When integrating over a curve in $\mathbb R^n$ parameterized by $\mathbf r(t)$, we multiply by $\left\|\frac{\mathrm d\mathbf r}{\mathrm dt}\right\|$. But we can think of the components of $\frac{\mathrm d\mathbf r}{\mathrm dt}$ as $1\times 1$ Jacobian determinants of each of $x_1, x_2, \dots, x_n$ individually with respect to $t$.

So what I'd like to do in general, to integrate over a $k$-dimensional object in $\mathbb R^n$ on which the variables $x_1, x_2, \dots, x_n$ are parameterized in terms of $u_1, u_2, \dots, u_k$, is:

  1. Write out all $\binom nk$ Jacobian determinants $\frac{\partial(x_{i_1}, x_{i_2}, \dots, x_{i_k})}{\partial(u_1, u_2, \dots, u_k)}$.
  2. Compute the norm of this $\binom nk$-dimensional vector: the square root of the sum of squares of these determinants.
  3. Integrate my scalar function, multiplied by this norm, with respect to $u_1, u_2, \dots, u_k$.

If this works, I would be very happy, because I would not need to know anything more than how to parameterize my $k$-dimensional object, and how to take partial derivatives.

I looked up a question about how to integrate over a surface in 4 dimensions, which is a special case of what I want to know. The answer there gives a formula that looks very different, but I checked in Mathematica and the contents of the square root simplify to the same thing. That's reassuring, but it doesn't tell me that my approach will continue working when integrating over a $5$-dimensional object in $17$ dimensions.

My question about this approach

Most importantly: does my approach work in general?

If it does work: am I overcomplicating things - is there a simpler way to compute the same quantity?

If it doesn't work: is there a correct method that's as concrete as my approach?

Best answer

Ultimately, your question boils down to comparing two ways of calculating the $k$-dimensional volume of a $k$-dimensional parallelepiped in $\Bbb R^n$. The more standard one is the Gram determinant, and it avoids any mention of exterior algebra. In your application, the matrix $A$ we consider will be the derivative matrix of your parametrization.

Let $v_1,\dots,v_k\in\Bbb R^n$ be linearly independent vectors, spanning a subspace $V\subset\Bbb R^n$. It is not unusual to ask for the $k$-dimensional volume of the parallelepiped $\mathscr P$ they span. Without discussing Hausdorff measures, a linear algebra student can proceed simply as follows: Extend $v_1,\dots,v_k$ to a basis for $\Bbb R^n$ by choosing $u_{k+1},\dots,u_n$ an orthonormal basis for the orthogonal complement $V^\perp$. Thinking of our usual "area of base times height" computations, the $k$-dimensional volume of $\mathscr P$ will be equal to the $n$-dimensional volume of the parallelepiped $\widetilde{\mathscr P}$ spanned by $v_1,\dots,v_k,u_{k+1},\dots,u_n$. Let $A$ be the $n\times k$ matrix whose columns are $v_1,\dots,v_k$, and let $B$ be the $n\times (n-k)$ matrix whose columns are $u_{k+1},\dots,u_n$; amalgamate these to form $C = [A|B]$. Then, as is well known, \begin{align*} \text{volume}(\mathscr P)^2=\text{volume}(\widetilde{\mathscr P})^2 &= (\det C)^2 = \det(C^\top C) = \det\left[\begin{array}{c|c} A^\top A & A^\top B \\ \hline B^\top A & B^\top B\end{array}\right] \\ &=\det\left[\begin{array}{c|c} A^\top A & O \\ \hline O & I_{n-k}\end{array}\right] = \det(A^\top A). \end{align*}
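In code, the Gram-determinant computation is short (a sketch; the small Laplace-expansion `det` helper is my own, not part of the answer): form $A^\top A$ and take the square root of its determinant.

```python
import math

def det(M):
    """Determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def gram_volume(vectors):
    """k-dimensional volume of the parallelepiped spanned by v_1, ..., v_k
    in R^n, via sqrt(det(A^T A)) where the v_i are the columns of A."""
    G = [[sum(a * b for a, b in zip(vi, vj)) for vj in vectors] for vi in vectors]
    return math.sqrt(det(G))

# Example: v1 = (1, 0, 1), v2 = (0, 1, 1) span a parallelogram of area sqrt(3).
area = gram_volume([(1.0, 0.0, 1.0), (0.0, 1.0, 1.0)])
```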

Your (more awkward) formula using the $\binom nk$ minors of the matrix $A$ is a generalization of the formula multivariable calculus students see, for example, when integrating over a graph. But it is a rather beautiful (and, to most, surprising) generalization of the Pythagorean Theorem. Given the parallelepiped $\mathscr P$, for any multi-index $I=(i_1i_2\cdots i_k)$ with $i_1<\dots<i_k$, we can consider the projection $P_I$ of $\mathscr P$ onto the $x_{i_1}x_{i_2}\dots x_{i_k}$-plane and compute its volume. Then the lovely result is that $$\text{volume}(\mathscr P)^2 = \sum_I \text{volume}(P_I)^2.\tag{$\star$}$$ (In the case $k=2$, $n=3$, this follows immediately from the formula for the cross product $v_1\times v_2$.) I see this as a fact from basic exterior algebra: We note that given the inner product on $\Bbb R^n$, there is an induced inner product on $\Lambda^k(\Bbb R^n)$ defined by declaring the set of $e_I = e_{i_1}\wedge\dots\wedge e_{i_k}$ with increasing $I$ to be an orthonormal basis. Then ($\star$) follows by expanding $v_1\wedge\dots\wedge v_k$ in terms of this orthonormal basis. See also the discussion of the Cauchy-Binet formula to deduce this matricially.
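A numeric spot-check of $(\star)$, in the spirit of Cauchy-Binet, for, say, $k=3$ and $n=5$ (random vectors of my own choosing; both sides computed with a small Laplace-expansion determinant):

```python
import math
import random
from itertools import combinations

def det(M):
    """Determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

random.seed(0)
n, k = 5, 3
# v_1, ..., v_k in R^n, stored as the columns of A.
v = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(k)]

# Left side of (*): squared volume via the Gram determinant det(A^T A).
G = [[sum(a * b for a, b in zip(v[i], v[j])) for j in range(k)] for i in range(k)]
vol_sq = det(G)

# Right side of (*): for each increasing multi-index I, the projection onto the
# x_{i1}...x_{ik}-plane has volume equal to the k x k minor of A on those rows;
# square these and sum.
proj_sq = sum(det([[v[c][i] for c in range(k)] for i in rows]) ** 2
              for rows in combinations(range(n), k))
# Cauchy-Binet guarantees vol_sq == proj_sq (up to floating-point error).
```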