Testing whether polynomial is in algebra of given collection of polynomials

A collection $\Sigma$ of polynomials is an algebra if:

  1. $\lambda f + \eta g \in \Sigma$ for any $f,g \in \Sigma$ and $\lambda,\eta \in \mathbb{R}$;
  2. $f,g \in \Sigma$ implies $fg \in \Sigma$;
  3. $1 \in \Sigma$.

We say that $P$ is in the algebra of $\{P_1,\dots,P_n\}$ if $P$ is in the smallest algebra containing $P_1,\dots,P_n$.

I was wondering if there was a way, on any computer math software, to check whether a given $P$ is in the algebra of a given collection $P_1,\dots,P_n$. I know Mathematica can check if $P$ is in the ideal generated by $P_1,\dots,P_n$, but I don't know about algebras, or about any software besides Mathematica (which I barely know).

Example: Take $n \ge 1$, and let $P_1 = x_1+\dots+x_n, P_2 = x_1^2+\dots+x_n^2,\dots P_n = x_1^n+\dots+x_n^n$. Then all $n$ of the following symmetric functions are in the algebra generated by $P_1,\dots,P_n$: $$x_1+\dots+x_n$$ $$x_1x_2+\dots+x_{n-1}x_n$$ $$x_1x_2x_3+\dots+x_{n-2}x_{n-1}x_n$$ $$\dots$$ $$x_1\dots x_n$$

I'd appreciate any help.

There are 3 solutions below.

Answer 1 (score 5):

First and foremost, I do not know of any software that solves this. But I do have some ideas that could be helpful.

Your example already includes multivariable polynomials, but let me first focus on single-variable polynomials.

The algebra generated by $\{P_1,...,P_n\}$ is the (generally infinite-dimensional) linear subspace of $\mathbb{R}[X]$ spanned by all monomials in these polynomials, such as $P_1^5$ and $P_3^2P_5P_6$, but also $P_1$ itself and the constant $1$.

First of all, the case $n=1$ is very easy. All elements of the algebra are of the form $\lambda_0+\lambda_1P_1+\lambda_2P_1^2+...+\lambda_kP_1^k$ with $k=0$ or $\lambda_k\neq0$. Note that the degree of this polynomial is $k\cdot\mbox{deg}(P_1)$. This already shows that the degree of $P$ has to be a multiple of that of $P_1$. If this is the case, then you can figure out what $\lambda_k$ needs to be and subtract $\lambda_kP_1^k$ from $P$ to reduce the degree of $P$. Then simply repeat the process to determine whether $P$ belongs to the algebra generated by $P_1$.
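For concreteness, here is a small Python sketch of this $n=1$ procedure (an editor's illustration, not part of the original answer; polynomials are coefficient lists with `p[i]` the coefficient of $X^i$, and the helper names are my own):

```python
from fractions import Fraction

def degree(p):
    """Index of the highest nonzero coefficient, or -1 for the zero polynomial."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def poly_mul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_pow(p, k):
    out = [Fraction(1)]
    for _ in range(k):
        out = poly_mul(out, p)
    return out

def in_algebra_single(P, P1):
    """Membership of P in the algebra generated by a single nonconstant P1:
    the degree of P must be a multiple of deg(P1); match the leading
    coefficient, subtract lam * P1^k, and repeat until a constant remains."""
    P = [Fraction(c) for c in P]
    P1 = [Fraction(c) for c in P1]
    d1 = degree(P1)
    assert d1 >= 1, "P1 must be nonconstant"
    while True:
        d = degree(P)
        if d <= 0:           # constants (and 0) are always in the algebra
            return True
        if d % d1 != 0:      # degree not a multiple of deg(P1): impossible
            return False
        k = d // d1
        lam = P[d] / P1[d1] ** k
        q = poly_pow(P1, k)
        q = q + [Fraction(0)] * (len(P) - len(q))
        P = [c - lam * qc for c, qc in zip(P, q)]  # degree strictly drops
```

For example, `in_algebra_single([1, 0, 2, 0, 1], [0, 0, 1])` recognizes $1 + 2X^2 + X^4 = 1 + 2P_1 + P_1^2$ for $P_1 = X^2$.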

When $n$ gets larger, the problem becomes a lot more difficult. Let us first consider the case where all $P_i$ are monomials $P_i(X)=X^{k_i}$. Then we require for every non-zero coefficient $\lambda_k$ of $P$ that $k$ can be written as a sum of numbers, with repetition allowed, from $\{k_1,...,k_n\}$. Reading up on the Frobenius problem makes me suspect this is already NP-complete with respect to $n$.
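The monomial-case check is a small dynamic program over reachable exponents; the following Python sketch (again my own illustration, with hypothetical names) tests whether each exponent carrying a nonzero coefficient is representable:

```python
def representable(k, gens):
    """Can k be written as a sum of numbers from gens, with repetition allowed?"""
    reach = [False] * (k + 1)
    reach[0] = True
    for t in range(1, k + 1):
        reach[t] = any(g <= t and reach[t - g] for g in gens)
    return reach[k]

def monomial_in_algebra(coeffs, exps):
    """coeffs[k] is the coefficient of X^k in P; exps holds the k_i with
    P_i(X) = X^{k_i}.  P is in the algebra iff every exponent with a
    nonzero coefficient is a nonnegative combination of the k_i."""
    return all(c == 0 or representable(k, exps)
               for k, c in enumerate(coeffs))
```

With `exps = [2, 3]`, every exponent except $1$ is representable, so $1 + 2X^2 + 3X^3$ passes but $X$ does not.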

The more I think about the general problem, the more I suspect it to be undecidable. But here is an algorithm that should find a solution relatively quickly (polynomial in the degrees, but exponential in $n$) if one exists, and will run forever if there is no solution.

Generate all monomials w.r.t. the polynomials $P_1,...,P_n$ in order of their degree. This can be done efficiently with a priority queue. For every monomial you find, add it to the list of monomials so far. This list can be seen as a list of vectors in $\mathbb{R}^{d+1}$ with $d$ the maximum degree of the monomials so far. Then we ask whether $P$ is a linear combination of these vectors.

Example: Consider $P(X)=X+2$, $P_1(X)=X+X^2$, $P_2(X)=X+X^3$. We find the following monomials with their corresponding vectors: \begin{align*} 1&&(1,0,0,0,0,0,0)\\ P_1&&(0,1,1,0,0,0,0)\\ P_2&&(0,1,0,1,0,0,0)\\ P_1^2&&(0,0,1,2,1,0,0)\\ P_1P_2&&(0,0,1,1,1,1,0)\\ P_2^2&&(0,0,1,0,2,0,1)\\ P_1^3&&(0,0,0,1,3,3,1)\\ \end{align*} At this point we have $7$ vectors in $7$ dimensions, but they are not linearly independent: one can check that $P_1^3 + 2P_1^2 + 2P_1 = P_2^2 + 3P_1P_2 + 2P_2$, and $P$ is not in the span of these monomials, so the algorithm continues to higher degrees. (In fact, $P_1$ and $P_2$ take equal values at the two complex roots of $a^2+a+2=0$, which sum to $-1$ and are therefore exchanged by $a\mapsto -1-a$, while $P$ separates them; so $P$ is not in the algebra at all, and on this input the search runs forever.)

Note that the same algorithm can be used for multivariable polynomials, although it will be a lot less efficient.
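For single-variable inputs, this generate-and-solve loop can be sketched in Python (an editor's illustration with exact rational arithmetic; the degree cap is an assumption added here, since the loop cannot terminate on its own when no solution exists):

```python
import itertools
from fractions import Fraction

def poly_mul(a, b):
    """Product of two polynomials as coefficient lists (p[i] = coeff of X^i)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def reduce_vec(v, basis):
    """Eliminate the pivot columns of the basis rows, in insertion order."""
    for piv, row in basis:
        if v[piv] != 0:
            c = v[piv] / row[piv]
            v = [a - c * b for a, b in zip(v, row)]
    return v

def in_span(vectors, target):
    """Is target in the rational span of vectors?  Incremental elimination."""
    basis = []
    for v in vectors:
        r = reduce_vec(list(v), basis)
        p = next((i for i, a in enumerate(r) if a != 0), None)
        if p is not None:
            basis.append((p, r))
    return all(a == 0 for a in reduce_vec(list(target), basis))

def in_algebra(target, gens, max_degree=10):
    """Generate products of powers of the (nonconstant) gens by increasing
    total degree and test whether target lies in their R-span.  Returns
    False if nothing is found up to max_degree, which is inconclusive."""
    degs = [len(g) - 1 for g in gens]
    vectors = [[Fraction(1)]]              # the constant polynomial 1
    for D in range(1, max_degree + 1):
        for exps in itertools.product(*(range(D // d + 1) for d in degs)):
            if sum(e * d for e, d in zip(exps, degs)) != D:
                continue                   # keep only products of degree D
            m = [Fraction(1)]
            for g, e in zip(gens, exps):
                for _ in range(e):
                    m = poly_mul(m, [Fraction(c) for c in g])
            vectors.append(m)
        L = max(len(target), max(len(v) for v in vectors))
        pad = lambda v: [Fraction(c) for c in v] + [Fraction(0)] * (L - len(v))
        if in_span([pad(v) for v in vectors], pad(target)):
            return True
    return False
```

For instance, with generators $X^2$ and $X^3$, the target $X^4 + X^5 = (X^2)^2 + X^2 X^3$ is found at total degree $5$, while $X$ is never found.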

Answer 2 (score 14):

(This answer is in four sections. The first section is a Mathematica implementation of Daniel Schepler's answer. The second section describes use of built-in Mathematica functions to address the very symmetric Example in the Question. The third and fourth sections are Mathematica code for the general problem of finding an explicit reduction of a given polynomial into the algebra spanned by a given set of polynomials without using the machinery of Groebner bases. The third section, implementing the multivariable version of SmileyCraft's answer, expands the set of products of powers of generators one total degree at a time, and is likely to take less time and memory to find a reduction than the fourth section's code (written at the same time, without knowledge of SmileyCraft's answer), which expands by taking all products of pairs of the current partial spanning set.)

inAlgebra[gens_List, vars_List, target_, gensSymbol_Symbol: None] := 
  (*  Adapted by Eric Towers from a description in https://math.stackexchange.com/a/3516363/123905  *)
  Module[{
      P, kernelGens, answerRels
    },
    kernelGens = GroebnerBasis[Join[Array[P, Length[gens]] - gens, {P - target}], 
      Join[Array[P, Length[gens]], {P}], vars, MonomialOrder -> EliminationOrder];
    answerRels = Simplify[P /. Solve[# == 0, P]] & /@ 
      Select[kernelGens, And[NumericQ[D[#, P]], D[#, P] != 0] &];
    Flatten[ReplaceAll[
      answerRels,
      Rule @@ # & /@ Transpose[{Array[P, Length[gens]], 
        If[gensSymbol === None, gens, Array[gensSymbol, Length[gens]]]}]
    ], 1]
  ]

This version adds an option not present in the prior versions: output can be in terms of powers of an indexed symbol, rather than literally the generators. The fourth argument is optional. If it is not given, or is given as None, then the reduction of the target polynomial to a linear combination of products of powers of the generators is given explicitly. If the fourth argument is a Symbol, P for instance, then the output is written as a linear combination of products of powers of that symbol indexed by the ordinal of the generator in the gens argument. Example:

inAlgebra[{x1 + x2, x1 x2}, {x1, x2}, x1^2 + x2^2]
inAlgebra[{x1 + x2, x1 x2}, {x1, x2}, x1^2 + x2^2, None]
inAlgebra[{x1 + x2, x1 x2}, {x1, x2}, x1^2 + x2^2, P]
(*  {-2 x1 x2 + (x1 + x2)^2}  *)
(*  {-2 x1 x2 + (x1 + x2)^2}  *)
(*  {P[1]^2 - 2 P[2]}  *)

Here, P[1] is the first generator, x1 + x2, and P[2] is the second generator, x1 x2.

Now the other examples, doubled up using the new optional argument.

inAlgebra[{x1 + x2, x1^2 + x2^2}, {x1, x2}, x1 x2]
inAlgebra[{x1 + x2, x1^2 + x2^2}, {x1, x2}, x1 x2, Gen]
inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 + x1 x3 + x2 x3]
inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 + x1 x3 + x2 x3, P]
inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 x3]
inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 x3, T]
(*  {1/2 (-x1^2 - x2^2 + (x1 + x2)^2)}  *)
(*  {1/2 (Gen[1]^2 - Gen[2])}  *)
(*  {1/2 (-x1^2 - x2^2 - x3^2 + (x1 + x2 + x3)^2)}  *)
(*  {1/2 (P[1]^2 - P[2])}  *)
(*  {1/6 ((x1 + x2 + x3)^3 - 3 (x1 + x2 + x3) (x1^2 + x2^2 + x3^2) + 2 (x1^3 + x2^3 + x3^3))}  *)
(*  {1/6 (T[1]^3 - 3 T[1] T[2] + 2 T[3])}  *)

The Example has a lot more structure than the general problem the Question starts with. The following Mathematica code expresses the elementary symmetric polynomials in terms of the power sum symmetric polynomials much more directly than computing the $\Bbb{R}$-span of successive approximations to the monoid of powers of the generators. This uses the fundamental theorem of symmetric polynomials -- that any symmetric polynomial can be written as a polynomial in the elementary symmetric polynomials. Then, we invert the relation to write the elementary symmetric polynomials in terms of the power sum symmetric polynomials.

elementariesAsSymmetricPowerSums[n_] := Module[{
    vars,
    powerSumsymmetricPolynomials,
    elementarySymmetricReductions
  },
  vars = Array[x, n];
  powerSumsymmetricPolynomials = Table[Sum[x[i]^power, {i, 1, n}], {power, 1, n}];
  elementarySymmetricReductions = 
    SymmetricReduction[#, vars, Array[(esp[n, #] &), n]][[1]] & /@ 
      powerSumsymmetricPolynomials;
  Solve[Array[pssp[n, #] &, n] == elementarySymmetricReductions, Array[(esp[n, #] &), n]]
]

Here, we use the symbols \begin{align*} \mathrm{pssp}[v,p] &= \sum_{i=1}^v x_i^p \\ \mathrm{esp}[v,p] &= \sum_{1 \leq i_1 < i_2 < \cdots < i_p \leq v} x_{i_1} x_{i_2}\cdots x_{i_p} \text{,} \end{align*} where $v$ is the number of variables, $p$ is the total degree of terms in the polynomial, $\mathrm{pssp}$ gives power sum symmetric polynomials, and $\mathrm{esp}$ gives elementary symmetric polynomials.

elementariesAsSymmetricPowerSums[3]
(*  {{esp[3, 1] -> pssp[3, 1], 
      esp[3, 2] -> 1/2 (pssp[3, 1]^2 - pssp[3, 2]), 
      esp[3, 3] -> 1/6 (pssp[3, 1]^3 - 3 pssp[3, 1] pssp[3, 2] + 2 pssp[3, 3])}}  *)

This says \begin{align*} x_1 + x_2 + x_3 &= x_1 + x_2 + x_3 \\ x_1 x_2 + x_1 x_3 + x_2 x_3 &= \frac{1}{2}\left((x_1 + x_2 + x_3)^2 - (x_1^2 + x_2^2 + x_3^2)\right) \\ x_1 x_2 x_3 &= \frac{1}{6}\left((x_1 + x_2 + x_3)^3 - 3(x_1 + x_2 + x_3)(x_1^2 + x_2^2 + x_3^2) + 2 (x_1^3 + x_2^3 + x_3^3)\right) \end{align*}
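The inversion computed here is the classical Newton's identities, $k\,e_k = \sum_{i=1}^{k} (-1)^{i-1} e_{k-i}\, p_i$. As a quick numeric cross-check (a Python sketch by the editor, not part of the original answer):

```python
from fractions import Fraction

def elementary_from_power_sums(p):
    """Newton's identities: given numeric power sums p = [p_1, ..., p_n],
    return the elementary symmetric values [e_1, ..., e_n] via
    k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i."""
    e = [Fraction(1)]                      # e_0 = 1
    for k in range(1, len(p) + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * Fraction(p[i - 1])
                for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]
```

For $x=(1,2,3)$ the power sums are $(6,14,36)$ and the recurrence returns $(6,11,6)$, agreeing with $e_2 = \frac{1}{2}(p_1^2-p_2)$ and $e_3 = \frac{1}{6}(p_1^3-3p_1p_2+2p_3)$ above.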

This method should be able to handle much larger instances quickly. As an example...

elementariesAsSymmetricPowerSums[8]
(*  {{esp[8, 1] -> pssp[8, 1], 
      esp[8, 2] -> 1/2 (pssp[8, 1]^2 - pssp[8, 2]), 
      esp[8, 3] -> 1/6 (pssp[8, 1]^3 - 3 pssp[8, 1] pssp[8, 2] + 2 pssp[8, 3]), 
      esp[8, 4] -> 1/24 (pssp[8, 1]^4 - 6 pssp[8, 1]^2 pssp[8, 2] + 3 pssp[8, 2]^2 + 8 pssp[8, 1] pssp[8, 3] - 6 pssp[8, 4]), 
      esp[8, 5] -> 1/120 (pssp[8, 1]^5 - 10 pssp[8, 1]^3 pssp[8, 2] + 15 pssp[8, 1] pssp[8, 2]^2 + 20 pssp[8, 1]^2 pssp[8, 3] - 20 pssp[8, 2] pssp[8, 3] - 30 pssp[8, 1] pssp[8, 4] + 24 pssp[8, 5]), 
      esp[8, 6] -> 1/720 (pssp[8, 1]^6 - 15 pssp[8, 1]^4 pssp[8, 2] + 45 pssp[8, 1]^2 pssp[8, 2]^2 - 15 pssp[8, 2]^3 + 40 pssp[8, 1]^3 pssp[8, 3] - 120 pssp[8, 1] pssp[8, 2] pssp[8, 3] + 40 pssp[8, 3]^2 - 90 pssp[8, 1]^2 pssp[8, 4] + 90 pssp[8, 2] pssp[8, 4] + 144 pssp[8, 1] pssp[8, 5] - 120 pssp[8, 6]), 
      esp[8, 7] -> (1/5040)(pssp[8, 1]^7 - 21 pssp[8, 1]^5 pssp[8, 2] + 105 pssp[8, 1]^3 pssp[8, 2]^2 - 105 pssp[8, 1] pssp[8, 2]^3 + 70 pssp[8, 1]^4 pssp[8, 3] - 420 pssp[8, 1]^2 pssp[8, 2] pssp[8, 3] + 210 pssp[8, 2]^2 pssp[8, 3] + 280 pssp[8, 1] pssp[8, 3]^2 - 210 pssp[8, 1]^3 pssp[8, 4] + 630 pssp[8, 1] pssp[8, 2] pssp[8, 4] - 420 pssp[8, 3] pssp[8, 4] + 504 pssp[8, 1]^2 pssp[8, 5] - 504 pssp[8, 2] pssp[8, 5] - 840 pssp[8, 1] pssp[8, 6] + 720 pssp[8, 7]), 
      esp[8, 8] -> (1/40320)(pssp[8, 1]^8 - 28 pssp[8, 1]^6 pssp[8, 2] + 210 pssp[8, 1]^4 pssp[8, 2]^2 - 420 pssp[8, 1]^2 pssp[8, 2]^3 + 105 pssp[8, 2]^4 + 112 pssp[8, 1]^5 pssp[8, 3] - 1120 pssp[8, 1]^3 pssp[8, 2] pssp[8, 3] + 1680 pssp[8, 1] pssp[8, 2]^2 pssp[8, 3] + 1120 pssp[8, 1]^2 pssp[8, 3]^2 - 1120 pssp[8, 2] pssp[8, 3]^2 - 420 pssp[8, 1]^4 pssp[8, 4] + 2520 pssp[8, 1]^2 pssp[8, 2] pssp[8, 4] - 1260 pssp[8, 2]^2 pssp[8, 4] - 3360 pssp[8, 1] pssp[8, 3] pssp[8, 4] + 1260 pssp[8, 4]^2 + 1344 pssp[8, 1]^3 pssp[8, 5] - 4032 pssp[8, 1] pssp[8, 2] pssp[8, 5] + 2688 pssp[8, 3] pssp[8, 5] - 3360 pssp[8, 1]^2 pssp[8, 6] + 3360 pssp[8, 2] pssp[8, 6] + 5760 pssp[8, 1] pssp[8, 7] - 5040 pssp[8, 8])}}  *)

The above is very specific to your example. But if your more general problems are close to this example, this might be a better starting place than the generic code below.


The first version of the general purpose search through the span of products of powers of generators of low total degree was very inefficient in expanding the collection of products of powers: it generated the new terms by (frequently redundantly) multiplying pairs of members of $\Sigma$. Instead, we now use nonnegative integer compositions to directly construct the exponent vectors in the products of powers, so we generate all the terms of a specific total degree at once, without producing any duplicates that must subsequently be removed.
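The enumeration performed by the Mathematica `compositions` helper below is the standard generation of weak compositions; in Python (an illustrative sketch) it reads:

```python
def compositions(total, parts):
    """Yield all weak compositions of `total` into `parts` nonnegative parts,
    i.e. the exponent vectors of products of powers of `parts` generators
    whose exponents sum to `total`."""
    if parts == 1:
        yield (total,)
        return
    for k in range(total + 1):
        for rest in compositions(total - k, parts - 1):
            yield (k,) + rest
```

`list(compositions(2, 2))` yields `[(0, 2), (1, 1), (2, 0)]`; in general there are $\binom{\mathrm{total}+\mathrm{parts}-1}{\mathrm{parts}-1}$ of them, each produced exactly once.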

inAlgebra[gens_List, vars_List, target_] :=
  Module[{
    C,
    compositions, expansion,
    partialRSpanningSet, realSolution,
    iterationDepth, attemptedSolution
    },
    compositions[total_, parts_] :=
      Flatten[Table[
          Join[{k}, #] & /@ compositions[total - k, parts - 1],
          {k, 0, total}
        ], 1];
    compositions[total_, 1] := {{total}};
    expansion[set_List, totalOrder_] := (Times @@ Power[set, #]) & /@ compositions[totalOrder, Length[set]];

    realSolution[set_] := 
      SolveAlways[C[0] + Table[C[i], {i, 1, Length[set]}].set == target, vars];

    iterationDepth = 1;
    partialRSpanningSet = Union[{}, expansion[gens, iterationDepth]];
    attemptedSolution = realSolution[partialRSpanningSet];
    While[Not[And[Head[attemptedSolution] === List, Length[attemptedSolution] > 0]],
      iterationDepth++;
      If[iterationDepth > $IterationLimit, 
        Print["$IterationLimit exceeded.  See documentation for $IterationLimit to see how to increase $IterationLimit in a Block[]."];
        Abort[];
      ];
      partialRSpanningSet = Join[partialRSpanningSet, expansion[gens, iterationDepth]];
      attemptedSolution = realSolution[partialRSpanningSet];
    ];
    (C[0] + Table[C[i], {i, 1, Length[partialRSpanningSet]}].partialRSpanningSet) /. attemptedSolution
  ]

Checking output again...

inAlgebra[{x1 + x2, x1^2 + x2^2}, {x1, x2}, x1 x2]
inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 + x1 x3 + x2 x3]
inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 x3]
(*  {1/2 (x1 + x2)^2 + 1/2 (-x1^2 - x2^2)}  *)
(*  {1/2 (x1 + x2 + x3)^2 + 1/2 (-x1^2 - x2^2 - x3^2)}  *)
(*  {1/6 (x1 + x2 + x3)^3 - 1/2 (x1 + x2 + x3) (x1^2 + x2^2 + x3^2) + 1/3 (x1^3 + x2^3 + x3^3)}  *)

(This is the first version of Mathematica code to reduce a given polynomial into a linear combination of products of powers of a given set of generators. A sometimes quicker and very likely less memory-hungry version appears above.)

The following Mathematica code does what you are requesting.

inAlgebra[gens_List, vars_List, target_] :=
  Module[{C,
      iterate,
      partialRSpanningSet, realSolution,
      iterationDepth, attemptedSolution
    },
    iterate[set_List] := Union[set,
      Reap[
        Table[
          Sow[Times[set[[f]], set[[g]] ]],
          {f, 1, Length[set]}, {g, f, Length[set]}]][[2, 1]]
      ];

    realSolution[set_] := 
      SolveAlways[C[0] + Table[C[i], {i, 1, Length[set]}].set == target, vars];

    partialRSpanningSet = gens;
    iterationDepth = 0;
    attemptedSolution = realSolution[partialRSpanningSet];
    While[Not[And[Head[attemptedSolution] === List, Length[attemptedSolution] > 0]],
      partialRSpanningSet = iterate[partialRSpanningSet];
      iterationDepth++;
      If[iterationDepth > $IterationLimit, 
        Print["$IterationLimit exceeded.  See documentation for $IterationLimit to see how to increase $IterationLimit in a Block[]."];
        Abort[];
      ];
      attemptedSolution = realSolution[partialRSpanningSet];
    ];
    (C[0] + Table[C[i], {i, 1, Length[partialRSpanningSet]}].partialRSpanningSet) /. attemptedSolution
]

It is based on the following observation: a product of linear combinations of elements of $\Sigma$ is a linear combination of products of powers of the $P_i$. So, first we look for a linear combination of the $P_i$ that gives your target polynomial. Then we look for linear combinations among the products of $\leq 2$ of the $P_i$, then among the products of $\leq 3$ of the $P_i$, continuing until either we find a solution, abort due to excessive iteration, or are externally terminated. Uses:

inAlgebra[{x1 + x2, x1^2 + x2^2}, {x1, x2}, x1 x2]
(*  {1/2 (x1 + x2)^2 + 1/2 (-x1^2 - x2^2)}  *)


inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 + x1 x3 + x2 x3]
(*  {1/2 (x1 + x2 + x3)^2 + 1/2 (-x1^2 - x2^2 - x3^2)}  *)


inAlgebra[{x1 + x2 + x3, x1^2 + x2^2 + x3^2, x1^3 + x2^3 + x3^3}, {x1, x2, x3}, x1 x2 x3]
(*  {1/6 (x1 + x2 + x3)^3 - 1/2 (x1 + x2 + x3) (x1^2 + x2^2 + x3^2) + 1/3 (x1^3 + x2^3 + x3^3)}  *)

Answer 3 (score 1):

Suppose we want to determine whether $q \in \mathbb{R}[x_1, \ldots, x_r]$ is in the subalgebra generated by $p_1, \ldots, p_n$. The general theory of Groebner bases allows us to find a Groebner basis of the kernel of the $\mathbb{R}$-algebra homomorphism $$\mathbb{R}[t_1, \ldots, t_n, s] \to \mathbb{R}[x_1, \ldots, x_r], t_i \mapsto p_i, s \mapsto q.$$ Furthermore, if the monomial order we chose has $s > t_1^{d_1} \cdots t_n^{d_n}$ for all $d_1, \ldots, d_n \ge 0$, then $q$ will be in the subalgebra generated by $p_1, \ldots, p_n$ if and only if one of the elements of this Groebner basis of the kernel is equal to $\lambda s - F(t_1, \ldots, t_n)$ for some $\lambda \in \mathbb{R}^*, F \in \mathbb{R}[t_1, \ldots, t_n]$.

If some specific computer algebra system does not have a builtin function for finding kernels, but it does have a general Groebner basis calculation routine, then you can find the required kernel by calculating a Groebner basis for the ideal $\langle t_i - p_i, s - q \rangle$ of $\mathbb{R}[t_1, \ldots, t_n, s, x_1, \ldots, x_r]$, using an elimination order such that the monomials of $\mathbb{R}[t_1, \ldots, t_n, s]$ are smaller than any monomials containing a positive power of $x_j$. Then take the elements of the Groebner basis of this ideal which lie in $\mathbb{R}[t_1, \ldots, t_n, s]$, and they will form the desired Groebner basis of the kernel.


This does have the disadvantage of having to do a new Groebner basis calculation for each new $q$, even if $p_1, \ldots, p_n$ are fixed. However, we can also produce a variant of the algorithm which allows you to perform the Groebner basis calculation once for fixed $p_1, \ldots, p_n$ with $q$ varying: find a Groebner basis of the ideal $I = \langle t_i - p_i \rangle$ of $\mathbb{R}[t_1, \ldots, t_n, x_1, \ldots, x_r]$ with respect to an elimination order where the monomials in $t_1, \ldots, t_n$ are smaller than any monomial with a positive power of some $x_j$. Now, given $q$, the reduction process of dividing $q$ (or technically $1 \otimes q \in \mathbb{R}[t_1, \ldots, t_n] \otimes_{\mathbb{R}} \mathbb{R}[x_1, \ldots, x_r] \simeq \mathbb{R}[t_1, \ldots, t_n, x_1, \ldots, x_r]$) by elements of the Groebner basis will give either zero or the representative of $(1 \otimes q) + I$ with minimal leading monomial. Then $q$ is in the subalgebra generated by $p_1, \ldots, p_n$ if and only if that remainder of $(1 \otimes q) \mathop{\mathrm{mod}} I$ is in $\mathbb{R}[t_1, \ldots, t_n]$ (if and only if the leading monomial in the remainder is in $\mathbb{R}[t_1, \ldots, t_n]$).