Construction of a polynomial with specified integral over several regions


This arises in the context of the finite element method. We want to find an $n$-variate polynomial, of degree $\leq m-1$ in each variable, whose integral over each of a family of subsets of the unit cube equals a prescribed value. That is, instead of a polynomial interpolant, which takes specified values at specified points (e.g. Lagrange polynomials), I want a polynomial whose integrals over certain regions are specified.

Define an index set $\mathcal{A}_{m,n} = \{0,\ldots,m-1\}^n$. We divide the unit cube $[0,1]^n$ into $m^n$ subcubes, each a translated copy of $[0,\tfrac{1}{m}]^n$. For each $\vec{k} \in \mathcal{A}_{m,n}$, define the subcube $$ E_{\vec{k}} = [\tfrac{k_1}{m},\tfrac{k_1+1}{m}]\times[\tfrac{k_2}{m},\tfrac{k_2+1}{m}]\times \cdots \times [\tfrac{k_n}{m},\tfrac{k_n+1}{m}] $$ Then define, for each $\vec{k} \in \mathcal{A}_{m,n}$, a functional on $L^1_\text{loc}(\mathbb{R}^n)$: $$ \sigma_{\vec{k}} : \phi \mapsto \int_{E_{\vec{k}}} \phi(x_1,\ldots,x_n)\, d\vec{x}$$ Next define $\mathcal{P}_{m,n} = \left\{ p = \sum_{\vec{k}\in\mathcal{A}_{m,n}} p_{\vec{k}} \vec{x}^{\vec{k}} \right\}$, where $\vec{x}^{\vec{k}} = x_1^{k_1}\cdots x_n^{k_n}$.
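For concreteness, here is a small sketch of mine (not part of the question itself; the helper name `sigma` is my own) that evaluates these functionals on monomials in exact arithmetic, using the fact that the integral over $E_{\vec{k}}$ factors into one-dimensional integrals:

```python
from fractions import Fraction
from itertools import product

def sigma(k, j, m):
    """Exact value of sigma_k(x^j): the integral of the monomial x^j over the
    subcube E_k, computed coordinate-by-coordinate via
    int_{k_i/m}^{(k_i+1)/m} x^{j_i} dx
        = ((k_i+1)^{j_i+1} - k_i^{j_i+1}) / ((j_i+1) * m^{j_i+1})."""
    val = Fraction(1)
    for ki, ji in zip(k, j):
        val *= Fraction((ki + 1) ** (ji + 1) - ki ** (ji + 1),
                        (ji + 1) * m ** (ji + 1))
    return val

# Example (m = 2, n = 2): integral of x1*x2 over E_(0,1) = [0, 1/2] x [1/2, 1]
print(sigma((0, 1), (1, 1), 2))  # -> 3/64

# Consistency check: the sigma_k over all subcubes sum to the integral
# over the whole unit cube, here of x1*x2, which is 1/4
m, n = 3, 2
print(sum(sigma(k, (1, 1), m) for k in product(range(m), repeat=n)))  # -> 1/4
```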

The problem is to prove that $\{ \sigma_{\vec{k}} \}_{\vec{k} \in \mathcal{A}_{m,n}}$ forms a basis for the dual space of $\mathcal{P}_{m,n}$. Since the number of functionals equals $\dim \mathcal{P}_{m,n} = m^n$, this is equivalent to showing that if $p \in \mathcal{P}_{m,n}$ satisfies $$ \sigma_{\vec{k}}(p) = 0 \qquad \text{ for all } \vec{k} \in \mathcal{A}_{m,n}, $$ then necessarily $p \equiv 0$, i.e. $p_{\vec{k}} = 0$ for all $\vec{k}$.

Similar results are proved all the time in the context of the finite element method, in the case where the functionals are instead evaluations at some set of points. Usually this can be done with either some straightforward linear algebra or some polynomial factorization tricks. I have a proof for the integral functionals in one dimension, using the explicit moments of the monomials to form the following matrix, and proving that it is similar to a Vandermonde matrix with nonzero determinant: $$ M_k^l = \sigma_k(x^l) = \int_{\frac{k}{m}}^{\frac{k+1}{m}} x^l \, dx = \frac{(k+1)^{l+1} - k^{l+1}}{(l+1)m^{l+1}} $$

The issue in more than one dimension is that forming a corresponding matrix for $n\geq 2$ requires an (arbitrary) enumeration of the multi-indices. For example, let $i \mapsto \alpha^i = (\alpha_1^i,\ldots,\alpha_n^i)$ be a bijection from $\{1,\ldots,m^n\}$ onto $\mathcal{A}_{m,n}$, and write $\beta^j$ for the same enumeration applied to the exponent multi-indices. Then we can form the matrix $$ M_i^j = \sigma_{\alpha^i}(x^{\beta^j}) $$ The problem then becomes the invertibility of this matrix, but I do not see an obvious Vandermonde structure, or any symmetry except under simultaneous permutation of the coordinates: $$ \sigma_{\alpha^i}(x^{\beta^j}) = \sigma_{\tau(\alpha^i)}(x^{\tau(\beta^j)}) \qquad \text{ for all } \tau \in S_n,$$ where $\tau(\alpha^i) = (\alpha_{\tau(1)}^i,\ldots,\alpha_{\tau(n)}^i)$.

A perspective I've been considering is to view $M$ as a certain tensor or multilinear map, which takes a polynomial coefficient tensor $P_{j_1 \cdots j_n} \in \bigotimes^n \mathbb{R}^m$ and acts by contraction to output a tensor $M_{k_1 \cdots k_n}^{j_1 \cdots j_n}P_{j_1 \cdots j_n} \in \bigotimes^n \mathbb{R}^m$ of integrals of that polynomial on each subcube. Then this problem might be about invertibility of this tensor in the sense that $M_{k_1 \cdots k_n}^{j_1 \cdots j_n}(M^{-1})_{j_1 \cdots j_n}^{l_1 \cdots l_n} = \delta_{k_1 \cdots k_n}^{l_1 \cdots l_n}$.
But I don't have a good handle on how to think about this, whether there are methods to prove this kind of invertibility, or whether this framing is even right to begin with. If you will excuse returning to the previous notation, the entries of this tensor look like $$ \begin{align*} M_{\vec{k}}^{\vec{j}} = M_{k_1 \cdots k_n}^{j_1 \cdots j_n} &= \sigma_{\vec{k}}(x^{\vec{j}}) \\ &= \int_{\frac{k_1}{m}}^{\frac{k_1+1}{m}} \cdots \int_{\frac{k_n}{m}}^{\frac{k_n+1}{m}} x_1^{j_1} \cdots x_n^{j_n} \, dx_1 \cdots dx_n \\ &= \prod_{i = 1}^n \frac{(k_i+1)^{j_i+1} - k_i^{j_i+1}}{(j_i+1)m^{j_i+1}} \end{align*} $$ The more straightforward approach would be to prove that $M_i^j$ (with the enumeration) has nonzero determinant, but the matrix representation is unwieldy and unnatural: its symmetries are obscured because they are 'broken' by the arbitrary choices in the enumeration, which relies on imposing a total order on the multi-indices.
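As a sanity check on the one-dimensional claim (again a sketch of mine, with hypothetical helper names `moment_matrix` and `det`), the matrix $M_k^l$ and its determinant can be computed in exact rational arithmetic for small $m$:

```python
from fractions import Fraction

def moment_matrix(m):
    """1D matrix M[k][l] = integral of x^l over [k/m, (k+1)/m], exactly."""
    return [[Fraction((k + 1) ** (l + 1) - k ** (l + 1), (l + 1) * m ** (l + 1))
             for l in range(m)] for k in range(m)]

def det(A):
    """Exact determinant by Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    d = Fraction(1)
    for i in range(len(A)):
        # find a pivot at or below row i
        piv = next((r for r in range(i, len(A)) if A[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            d = -d  # row swap flips the sign
        d *= A[i][i]
        for r in range(i + 1, len(A)):
            f = A[r][i] / A[i][i]
            for c in range(i, len(A)):
                A[r][c] -= f * A[i][c]
    return d

for m in range(1, 7):
    assert det(moment_matrix(m)) != 0  # nonsingular for these small m
print(det(moment_matrix(2)))  # -> 1/8
```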

My question for you: How can I prove that this matrix (tensor?) is invertible? Or how can I prove that such polynomials exist? Is there a way to leverage the one-dimensional case to prove the higher-dimensional ones? Even just your thoughts on possible approaches would be appreciated.
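One observation worth flagging (my own, so treat it as a suggestion rather than part of the question): the product formula for $M_{\vec{k}}^{\vec{j}}$ above says each entry factors coordinate-by-coordinate, which suggests that under the lexicographic enumeration $M_i^j$ is exactly the $n$-fold Kronecker product of the one-dimensional matrix with itself. If that holds in general, invertibility would reduce to the one-dimensional case via $\det(A \otimes B) = \det(A)^q \det(B)^p$ for $A \in \mathbb{R}^{p \times p}$, $B \in \mathbb{R}^{q \times q}$. A quick exact-arithmetic check for small cases (helper names are mine):

```python
from fractions import Fraction
from itertools import product

def moment_matrix(m):
    """1D matrix M[k][l] = integral of x^l over [k/m, (k+1)/m], exactly."""
    return [[Fraction((k + 1) ** (l + 1) - k ** (l + 1), (l + 1) * m ** (l + 1))
             for l in range(m)] for k in range(m)]

def multivariate_matrix(m, n):
    """M[i][j] = sigma_{alpha^i}(x^{alpha^j}) under the lexicographic
    enumeration of A_{m,n}; each entry is a product of 1D moments."""
    idx = list(product(range(m), repeat=n))
    M1 = moment_matrix(m)

    def entry(k, j):
        v = Fraction(1)
        for ki, ji in zip(k, j):
            v *= M1[ki][ji]
        return v

    return [[entry(k, j) for j in idx] for k in idx]

def kron(A, B):
    """Kronecker product of two square matrices of Fractions."""
    return [[A[i][j] * B[k][l] for j in range(len(A)) for l in range(len(B))]
            for i in range(len(A)) for k in range(len(B))]

# Lexicographic enumeration appears to make M = M1 (x) M1 for n = 2
for m in (2, 3):
    M1 = moment_matrix(m)
    assert multivariate_matrix(m, 2) == kron(M1, M1)
```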