Can the implication $(x_1 = 0) \rightarrow (p(x_1,...,x_n) = 0)$ be encoded in a system of polynomial constraints in $\mathbb{C}[x_1,...,x_n]$?


Consider a set $S$ of polynomials in $\mathbb{C}[x_1,x_2,...,x_n]$, the polynomial ring of $n$ variables over the complex numbers.

The set $S$ can then be interpreted as a system of constraints on the variables $x_1,..., x_n$, where for all $p \in S$, $p(x_1,...,x_n) = 0$.

Many simple constraint statements can be encoded as polynomials to include in the system of constraints. For example, the constraint $x_1 \in \{-1,0,1\}$ can be encoded with the polynomial $x_1^3 - x_1 = x_1(x_1+1)(x_1-1)$.
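As a quick numerical sanity check (a hypothetical Python sketch; the function name is mine), the constraint polynomial vanishes exactly on the allowed values:

```python
def membership_constraint(x):
    """Encodes x in {-1, 0, 1}: returns 0 iff x satisfies the constraint."""
    return x**3 - x

# The constraint holds on the allowed values...
assert all(membership_constraint(x) == 0 for x in (-1, 0, 1))
# ...and fails on sample values outside the set (real or complex).
assert all(membership_constraint(x) != 0 for x in (2, -2, 0.5, 3j))
```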

For a polynomial $f \in \mathbb{C}[x_1,...,x_n]$ and $s,t \in \mathbb{C}$, we can encode an implication of the form: $$(x_1 \neq s) \rightarrow (f(x_1,...,x_n) = t)$$ by including this polynomial in the constraint set: $$(x_1 - s)(f(x_1,...,x_n) - t).$$
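A small numerical sketch of this encoding, with the hypothetical choices $s = 2$, $t = 5$, and $f(x_1, x_2) = x_1 + x_2$ (all chosen only for illustration):

```python
# Encoding of the implication (x1 != s) -> (f = t) as the single
# polynomial constraint (x1 - s) * (f - t) = 0.
s, t = 2, 5

def f(x1, x2):
    return x1 + x2

def constraint(x1, x2):
    return (x1 - s) * (f(x1, x2) - t)

# When x1 != s, the constraint vanishes only if f = t:
assert constraint(3, 2) == 0   # x1 = 3 != s, and f = 5 = t: satisfied
assert constraint(3, 1) != 0   # x1 = 3 != s, but f = 4 != t: violated
# When x1 = s, the constraint vanishes no matter what f is:
assert constraint(2, 100) == 0
```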

However, I do not know how to encode an implication whose hypothesis is an equality on a variable without expanding to a larger polynomial ring.

For example, if we expand to a larger polynomial ring $\mathbb{C}[a,x_1,x_2,...,x_n]$ to give us an 'auxiliary' variable $a$, then it is possible to encode implications of the form: $$(x_1 = s) \rightarrow (f(x_1,...,x_n) = t)$$ by including this polynomial in the constraint set: $$(1 - a (x_1 - s))(f(x_1,...,x_n) - t).$$ When $x_1=s$, this reduces to $f(x_1,...,x_n) - t$ as desired, and when $x_1\neq s$, this can always be satisfied by $a=1/(x_1-s)$ without placing any further constraints on $x_1$ or the other variables.
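The same kind of numerical sketch works for the auxiliary-variable encoding, again with the hypothetical choices $s = 2$, $t = 5$, and $f(x_1, x_2) = x_1 + x_2$:

```python
# Encoding of the implication (x1 = s) -> (f = t) as the constraint
# (1 - a*(x1 - s)) * (f - t) = 0, using an auxiliary variable a.
s, t = 2, 5

def f(x1, x2):
    return x1 + x2

def constraint(a, x1, x2):
    return (1 - a * (x1 - s)) * (f(x1, x2) - t)

# When x1 = s, the first factor is 1 for every a, so f = t is forced:
assert constraint(7, 2, 3) == 0   # f(2, 3) = 5 = t: satisfied
assert constraint(7, 2, 4) != 0   # f(2, 4) = 6 != t: violated for this a
# When x1 != s, choosing a = 1/(x1 - s) satisfies the constraint
# regardless of the value of f:
x1, x2 = 4, 100
assert constraint(1 / (x1 - s), x1, x2) == 0
```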

There are other ways to encode such implications as polynomial constraints, but all methods I know of require expanding the polynomial ring to give some auxiliary variables. This led me to wondering...

  • Is it impossible to encode such constraints with polynomials without enlarging the polynomial ring?
    Can this be proven?

It feels like there should be a really simple answer here if viewed in the right way, but I can't think of a line of attack that wouldn't also rule out the possibility of it working with an expanded polynomial ring. And since we know that is possible, I'm not sure where to start.



BEST ANSWER

Here's a partial answer: sometimes and sometimes not; but this is only partial, because ideally we'd be able to say when you can and when you can't.

It's definitely possible for some polynomial $ p $, because here's an example where it can be done: when $ p $ is the zero polynomial. That's because the implication is always true, so we can use $ 0 = 0 $ as the constraint.

And it's definitely not possible for every polynomial $ p $, because here's an example where it can't be done: $ n = 1 $ and $ p ( x _ 1 ) = 1 $. That's because the implication is equivalent to the statement $ x _ 1 \ne 0 $, which has an infinite number of solutions (although not everything is a solution), yet any constraint of the form $ q ( x _ 1 ) = 0 $ can have only a finite number of solutions (when not everything is a solution). And a larger system of constraints could only have fewer solutions.

We could, of course, still ask for which polynomials $ p $ this can be done. I suspect that it can only be done when the implication $ x _ 1 = 0 \implies p ( x _ 1 , \ldots , x _ n ) = 0 $ is always true (for example, besides my first example, if $ p ( x _ 1 , x _ 2 ) = x _ 1 $ or $ p ( x _ 1 , x _ 2 , x _ 3 ) = x _ 1 ^ 2 ( x _ 2 + x _ 3 ) $). But I haven't proved that yet. You can't just do a counting argument for polynomials with more than one variable.


Side note: It's interesting to see how much logic we can do here. We can get a statement that's always true: $ 0 = 0 $. We can get a statement that's always false: $ 1 = 0 $. We can perform the disjunction of two statements by multiplying their polynomials: $ p ( x _ 1 , \ldots , x _ n ) q ( x _ 1 , \ldots , x _ n ) = 0 $ iff $ p ( x _ 1 , \ldots , x _ n ) = 0 $ or $ q ( x _ 1 , \ldots , x _ n ) = 0 $. (Note that $ x _ 1 \ne s \implies p ( x _ 1 , \ldots , x _ n ) = 0 $ is equivalent to $ x _ 1 = s $ or $ p ( x _ 1 , \ldots , x _ n ) = 0 $, so it's a special case of disjunction.)

Conjunction is a little trickier. You said a system of polynomial constraints, so we can do conjunction by simply putting both polynomials into the system. If we're working over a formally real field, then we can do conjunction by squaring and adding: $ p ( x _ 1 , \ldots , x _ n ) ^ 2 + q ( x _ 1 , \ldots , x _ n ) ^ 2 = 0 $ iff $ p ( x _ 1 , \ldots , x _ n ) = 0 $ and $ q ( x _ 1 , \ldots , x _ n ) = 0 $ in a formally real field. But I don't think that we can do it (in general) with a single polynomial over the complex numbers.
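Indeed, over $\mathbb{C}$ the sum-of-squares trick already fails for the pair $p = 1$, $q = i$: the "conjunction" constraint is satisfied even though neither statement holds.

```python
# Counterexample over the complex numbers: p**2 + q**2 = 0 does not
# force p = 0 and q = 0, unlike over a formally real field.
p, q = 1, 1j
assert p**2 + q**2 == 0    # 1 + (-1) = 0: the constraint is satisfied...
assert p != 0 and q != 0   # ...even though neither p = 0 nor q = 0 holds
```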

That leaves negation. Over a finite field, we can do it with $ \prod \limits _ { s \ne 0 } ( p ( x _ 1 , \ldots , x _ n ) - s ) $. We can't do it (in general) over an infinite field, since $ x _ 1 \ne 0 $ has infinitely many solutions (but not all solutions) and so can't be encoded with a system of polynomials over any infinite field.
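A sketch of this negation product over the small finite field $\mathbb{F}_3$ (arithmetic done mod $3$ in Python; the function name is mine):

```python
# Over GF(P), the product over all nonzero s of (v - s) vanishes
# exactly when v is one of the nonzero elements, i.e. when v != 0.
P = 3  # field size (a prime, so integers mod P form a field)

def neq_zero_constraint(v):
    """Returns 0 (mod P) iff v != 0 in GF(P)."""
    prod = 1
    for s in range(1, P):
        prod = prod * (v - s) % P
    return prod

assert neq_zero_constraint(0) != 0                        # v = 0: violated
assert all(neq_zero_constraint(v) == 0 for v in (1, 2))   # v != 0: satisfied
```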

And because we can do both disjunction and falsum, we can encode classical implication if and only if we can encode negation: $ \neg \, P \; \equiv \; ( P \implies \bot ) $, while $ P \implies Q \; \equiv \; \neg \, P \; \vee \; Q $.

ANSWER

Using @TobyBartels's ideas for the $n=1$ case, it is possible to finish this off and prove his conjecture:

I suspect that it can only be done when the implication $ x _ 1 = 0 \implies p ( x _ 1 , \ldots , x _ n ) = 0 $ is always true

As suggested, if an implication is always true, then we can just encode it with the 0 polynomial.

However, if an implication of the form $$x_1 = 0 \implies p(x_1, \ldots, x_n) = 0$$ is not always true, then there exists some point with $x_1 = 0$ where the implication is false, while the implication is vacuously true at every point with $x_1 \neq 0$. In particular, it holds on an open ball in $\mathbb{C}^n$ of radius less than $1$ centered at any point with $x_1 = 1$, since every point of such a ball has $x_1 \neq 0$.

Any polynomial encoding the implication would then have to vanish on that open ball. Since a polynomial is analytic, it would have to be identically zero on all of $\mathbb{C}^n$, and so could not be non-zero at the point where the implication fails. Trying to encode with multiple polynomials does not help: the intersection of their zero sets would have to contain the ball, so each polynomial individually must vanish on it, leading to the same problem.

Since this proof relies on analyticity, the finite-field case naturally escapes it.


Furthermore, @TobyBartels's side note gives new insight into what allows the "auxiliary variables" approach to work. The auxiliary variable allows encoding the negation of $(x_1 = 0)$ as the polynomial constraint $(1 - a x_1 = 0)$, with the side effect of also constraining $a$. This isn't specific to this particular encoding: to escape the no-go argument above, any encoding must necessarily constrain another variable. Only in the exceptional case where the system already contained constraints under which some polynomial constraint encodes $x_1 \neq 0$ would it be possible to encode the implication without further restricting another variable.

Exploiting this exceptional case would require knowledge of the solution space of the system of constraints, so this option for encoding the implication is clearly not general, and it is unlikely to be practically useful either.

In short: an auxiliary variable is the way to go, and now I really feel like I understand why.