Integrating a scalar field on a subspace = integrating its Fourier transform on the orthogonal complement


Suppose you have some function $f: \Bbb R^n \to \Bbb R$. You also have an injective linear transformation $A: \Bbb R^m \to \Bbb R^n$, with $m < n$, so that the image of $A$ is an $m$-dimensional subspace of $\Bbb R^n$. You would like to evaluate the following integral over all of $\Bbb R^m$:

$$ \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty f(A\vec u)\, du_1 \cdots du_m $$

where $\vec u = (u_1, u_2, ..., u_m)^T$.
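
For concreteness, in the simplest case $n = 2$, $m = 1$ with, say, $A = (\cos\theta, \sin\theta)^T$, this is just the integral of $f$ along the line through the origin in the direction $(\cos\theta, \sin\theta)$:

$$ \int_{-\infty}^\infty f(u\cos\theta,\, u\sin\theta)\, du. $$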

Let's assume this integral exists and converges absolutely, that the Fourier transform of $f$ exists, and in general that $f$ is as non-pathological as you could hope for (say, continuous and piecewise-analytic). We then have an important theorem:

Theorem: The integral of the scalar field $f$ over this subspace is equal (up to a scale factor) to the integral of the Fourier transform of $f$ (if it exists) over the orthogonal complement of that subspace.
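
As a quick sanity check (using the convention $\hat f(\xi) = \int_{\Bbb R^n} f(x)\, e^{-2\pi i x\cdot\xi}\, dx$, and taking the columns of $A$ to be orthonormal): the Gaussian $f(x) = e^{-\pi\|x\|^2}$ is its own Fourier transform, and

$$ \int_{\Bbb R^m} f(A\vec u)\, d\vec u = \int_{\Bbb R^m} e^{-\pi\|\vec u\|^2}\, d\vec u = 1 = \int_{\Bbb R^{n-m}} e^{-\pi\|\vec v\|^2}\, d\vec v, $$

where the right-hand side is the integral of $\hat f$ over the $(n-m)$-dimensional orthogonal complement (again parameterized orthonormally). So in this case the scale factor is $1$.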

This is pretty easy to see by looking at the distributional Fourier transform. The integral we are talking about is the inner product of $f$ with a certain distribution, which I called a "delta subspace distribution" in this related post. Thus, via Parseval's theorem, we get the same thing as if we take the inner product of the Fourier transforms, and the Fourier transform of this "delta subspace distribution" will be another delta subspace distribution whose support is the orthogonal complement of the original subspace. The result will be scaled in some way that depends on the choice of basis for the image of $A$ (i.e. the columns of $A$) and the choice of basis for our orthogonal complement.
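
Spelled out formally (and non-rigorously), assuming the columns of $A$ are orthonormal and using the convention $f(x) = \int_{\Bbb R^n} \hat f(\xi)\, e^{2\pi i x\cdot\xi}\, d\xi$, the computation I have in mind is

$$ \int_{\Bbb R^m} f(A\vec u)\, d\vec u = \int_{\Bbb R^m} \int_{\Bbb R^n} \hat f(\xi)\, e^{2\pi i (A\vec u)\cdot\xi}\, d\xi\, d\vec u = \int_{\Bbb R^n} \hat f(\xi)\, \delta^{(m)}\!\big(A^T\xi\big)\, d\xi, $$

since $(A\vec u)\cdot\xi = \vec u\cdot(A^T\xi)$ and the $\vec u$-integral of $e^{2\pi i \vec u\cdot(A^T\xi)}$ is (formally) an $m$-dimensional delta. The condition $A^T\xi = 0$ says exactly that $\xi$ is orthogonal to the image of $A$, so the last integral is the integral of $\hat f$ over $(\operatorname{im} A)^\perp$, consistent with the scale factor of $1$ in the orthonormal case.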

Questions:

  1. This theorem seems very general. Does it have a name, and if so, what is it?
  2. The proof of this involves multidimensional distributions, but it's really a purely analytic theorem about multiple integrals of scalar fields. Is there some simple proof of this that doesn't require getting involved with multidimensional distributions and Parseval's theorem?
  3. Again, the result will need to be scaled in some way that depends on the bases chosen: the columns of $A$ as the first basis, and whatever basis we choose for the orthogonal complement. I would expect it has something to do with the volumes of the parallelotopes formed by the basis vectors. What, precisely, is it?

This is related to, but intended to be a spinoff of, this post, where I go further into the distributional perspective and ask some technical questions about separability of distributions and such. This question, on the other hand, is really just about the analytic perspective: this seems like a simple theorem, so it must have a name and be provable without getting so deep into distributions. How does it work?