Consider a set $S$ of polynomials in $\mathbb{C}[x_1,x_2,...,x_n]$, the polynomial ring of $n$ variables over the complex numbers.
The set $S$ can then be interpreted as a system of constraints on the variables $x_1,..., x_n$, where for all $p \in S$, $p(x_1,...,x_n) = 0$.
Many simple constraint statements can be encoded as a set of polynomials to include in the system of polynomial constraints. For example, the constraint $x_1 \in \{-1,0,1\}$ can be encoded with the polynomial $x_1^3 - x_1 = x_1(x_1+1)(x_1-1)$.
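As a quick sanity check (a small Python sketch, not part of the question itself), we can confirm that this polynomial vanishes exactly on the allowed values:

```python
# Constraint polynomial encoding x1 in {-1, 0, 1}:
# it vanishes exactly at the allowed values.
def membership_poly(x1):
    return x1**3 - x1  # = x1 * (x1 + 1) * (x1 - 1)

allowed = [-1, 0, 1]
# Vanishes on every allowed value...
assert all(membership_poly(x) == 0 for x in allowed)
# ...and is nonzero on sample values outside the set.
assert all(membership_poly(x) != 0 for x in [2, -3, 5])
```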
For a polynomial $f \in \mathbb{C}[x_1,...,x_n]$ and $s,t \in \mathbb{C}$, we can encode an implication of the form: $$(x_1 \neq s) \rightarrow (f(x_1,...,x_n) = t)$$ by including this polynomial in the constraint set: $$(x_1 - s)(f(x_1,...,x_n) - t).$$
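To see this encoding in action, here is a sketch with a hypothetical choice of $f(x_1,x_2)=x_1+x_2$, $s=2$, $t=5$ (these specific values are mine, chosen only for illustration):

```python
# Encoding (x1 != s) -> (f(...) = t) as the single polynomial
# (x1 - s) * (f(...) - t), with the hypothetical choice
# f(x1, x2) = x1 + x2, s = 2, t = 5.
def f(x1, x2):
    return x1 + x2

def implication_poly(x1, x2, s=2, t=5):
    return (x1 - s) * (f(x1, x2) - t)

# When x1 = s, the constraint is satisfied no matter what f evaluates to.
assert implication_poly(2, 100) == 0
# When x1 != s, the constraint forces f(x1, x2) = t.
assert implication_poly(3, 2) == 0   # f(3, 2) = 5 = t: satisfied
assert implication_poly(3, 9) != 0   # f(3, 9) = 12 != t: violated
```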
However, I do not know how to encode an implication whose hypothesis is an equality on a variable without expanding to a larger polynomial ring.
For example, if we expand to a larger polynomial ring $\mathbb{C}[a,x_1,x_2,...,x_n]$ to give us an 'auxiliary' variable $a$, then it is possible to encode implications of the form: $$(x_1 = s) \rightarrow (f(x_1,...,x_n) = t)$$ by including this polynomial in the constraint set: $$(1 - a (x_1 - s))(f(x_1,...,x_n) - t).$$ When $x_1 = s$, this reduces to $f(x_1,...,x_n) - t$ as desired, and when $x_1 \neq s$, this can always be satisfied by choosing $a = 1/(x_1-s)$ without placing any further constraints on $x_1$ or the other variables.
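A sketch of the auxiliary-variable trick, again with hypothetical values $f(x_1,x_2)=x_1+x_2$, $s=1$, $t=4$ of my own choosing:

```python
# Encoding (x1 = s) -> (f(...) = t) with an auxiliary variable a:
# (1 - a * (x1 - s)) * (f(...) - t),
# with the hypothetical choice f(x1, x2) = x1 + x2, s = 1, t = 4.
def aux_poly(x1, x2, a, s=1, t=4):
    return (1 - a * (x1 - s)) * ((x1 + x2) - t)

# When x1 = s, the first factor is 1 for every a,
# so the constraint forces x1 + x2 = t.
assert aux_poly(1, 3, a=7) == 0    # 1 + 3 = 4 = t: satisfied for any a
assert aux_poly(1, 5, a=7) != 0    # 1 + 5 != t: violated, no a can help

# When x1 != s, choosing a = 1/(x1 - s) kills the first factor,
# so no constraint is placed on f at all.
x1, x2 = 3, 100
a = 1 / (x1 - 1)
assert aux_poly(x1, x2, a) == 0
```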
There are other ways to encode such implications as polynomial constraints, but all the methods I know of require expanding the polynomial ring to obtain auxiliary variables. This led me to wonder:
- Is it impossible to encode such constraints with polynomials without enlarging the polynomial ring? Can this be proven?
It feels like there should be a really simple answer here if viewed in the right way, but I can't think of a line of attack that wouldn't also rule out the possibility of it working with an expanded polynomial ring. And since we know that is possible, I'm not sure where to start.
Here's a partial answer: sometimes you can, and sometimes you can't. It's only partial because ideally we'd be able to say exactly when each is the case.
It's definitely possible for some polynomials $p$, because here's an example where it can be done: when $p$ is the zero polynomial. That's because the implication is always true, so we can use $0 = 0$ as the constraint.
And it's definitely not possible for every polynomial $p$, because here's an example where it can't be done: $n = 1$ and $p(x_1) = 1$. That's because the implication is equivalent to the statement $x_1 \ne 0$, which has an infinite number of solutions (although not everything is a solution), yet any constraint of the form $q(x_1) = 0$ can have only a finite number of solutions (when not everything is a solution). And a larger system of constraints could only have fewer solutions.
We could, of course, still ask for which polynomials $p$ this can be done. I suspect that it can only be done when the implication $x_1 = 0 \implies p(x_1, \ldots, x_n) = 0$ is always true (for example, besides my first example, if $p(x_1, x_2) = x_1$ or $p(x_1, x_2, x_3) = x_1^2(x_2 + x_3)$). But I haven't proved that yet. You can't just do a counting argument for polynomials with more than one variable.
Side note: It's interesting to see how much logic we can do here. We can get a statement that's always true: $0 = 0$. We can get a statement that's always false: $1 = 0$. We can perform the disjunction of two statements by multiplying their polynomials: $p(x_1, \ldots, x_n)\, q(x_1, \ldots, x_n) = 0$ iff $p(x_1, \ldots, x_n) = 0$ or $q(x_1, \ldots, x_n) = 0$. (Note that $x_1 \ne s \implies p(x_1, \ldots, x_n) = 0$ is equivalent to $x_1 = s$ or $p(x_1, \ldots, x_n) = 0$, so it's a special case of disjunction.)
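A quick numeric illustration of disjunction by multiplication (my own example polynomials, in one variable for simplicity):

```python
# Disjunction by multiplication: p*q = 0 iff p = 0 or q = 0,
# which holds over any field since a field has no zero divisors.
# Hypothetical choices: p(x) = x - 1, q(x) = x + 2.
def p(x): return x - 1
def q(x): return x + 2
def disjunction(x): return p(x) * q(x)

assert disjunction(1) == 0    # p(1) = 0, so the disjunction holds
assert disjunction(-2) == 0   # q(-2) = 0, so the disjunction holds
assert disjunction(5) != 0    # neither factor vanishes
```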
Conjunction is a little trickier. You said a *system* of polynomial constraints, so we can do conjunction by simply putting both polynomials into the system. If we're working over a formally real field, then we can do conjunction with a single polynomial by squaring and adding: $p(x_1, \ldots, x_n)^2 + q(x_1, \ldots, x_n)^2 = 0$ iff $p(x_1, \ldots, x_n) = 0$ and $q(x_1, \ldots, x_n) = 0$ in a formally real field. But I don't think that we can do it (in general) with a single polynomial over the complex numbers.
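A small numeric sketch of why the sum-of-squares trick works over the reals but breaks over the complex numbers (the sample values are mine):

```python
# Over the reals, a sum of squares vanishes only when every term does:
# p^2 + q^2 = 0 forces p = 0 and q = 0.
for a in (-2.0, -1.0, 1.0, 3.0):
    for b in (-1.0, 0.0, 2.0):
        if (a, b) != (0.0, 0.0):
            assert a**2 + b**2 > 0  # nonzero pair => nonzero sum of squares

# Over the complex numbers squares can cancel, so the trick fails:
# here p = 1 and q = i are both nonzero, yet p^2 + q^2 = 1 + (-1) = 0.
p, q = 1, 1j
assert p != 0 and q != 0
assert p**2 + q**2 == 0
```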
That leaves negation. Over a finite field, we can do it with $\prod\limits_{s \ne 0} (p(x_1, \ldots, x_n) - s)$. We can't do it (in general) over an infinite field, since $x_1 \ne 0$ has infinitely many solutions (but not all solutions) and so can't be encoded with a system of polynomials over any infinite field.
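Here is a sketch of the finite-field negation trick over $\mathbb{F}_3$ (the smallest interesting case; the choice of field is mine):

```python
# Negation over a finite field: over GF(3), the product over all
# nonzero s of (y - s), i.e. (y - 1)(y - 2), vanishes mod 3
# exactly when y is nonzero mod 3 -- encoding the statement y != 0.
P = 3  # field size

def negation_poly(y):
    out = 1
    for s in range(1, P):        # all nonzero elements of GF(3)
        out = out * (y - s) % P
    return out

assert negation_poly(0) != 0     # y = 0: constraint violated, as desired
assert negation_poly(1) == 0     # y != 0: constraint satisfied
assert negation_poly(2) == 0
```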
And because we can do both disjunction and falsum, we can encode classical implication if and only if we can encode negation: $\neg P \equiv (P \implies \bot)$, while $P \implies Q \equiv \neg P \vee Q$.