It is possible to convert a logical expression into elementary algebra. That is, there are substitutions that convert an expression such as $$ \left( \left(a \rightarrow b\right) \wedge \left(c \rightarrow d\right) \right) \rightarrow \left(a \vee b \right) $$ into an equivalent elementary algebra expression $$a+b+c-ab-ac-bc-cd+abc+acd+bcd-abcd$$ The substitutions used were $x\wedge y = xy$, $x \vee y = x + y - xy$ and $x \rightarrow y = 1-x(1-y)$.
These two formulae are equivalent in the sense that if an assignment of $T$s and $F$s satisfies the logical formula, then the corresponding substitution of $1$s and $0$s makes the algebraic expression equal $1$, and otherwise $0$.
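As a sanity check, here is a small Python sketch (the function names are mine, not from the question) that brute-forces all $2^4$ assignments and confirms the two expressions agree:

```python
from itertools import product

def implies(x, y):
    # Logical implication on booleans
    return (not x) or y

def formula(a, b, c, d):
    # ((a -> b) and (c -> d)) -> (a or b)
    return implies(implies(a, b) and implies(c, d), a or b)

def poly(a, b, c, d):
    # a + b + c - ab - ac - bc - cd + abc + acd + bcd - abcd
    return (a + b + c - a*b - a*c - b*c - c*d
            + a*b*c + a*c*d + b*c*d - a*b*c*d)

# Check every 0/1 assignment of (a, b, c, d)
for bits in product([0, 1], repeat=4):
    assert int(formula(*bits)) == poly(*bits)
print("polynomial matches the formula on all 16 assignments")
```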
I know that solving a $3$-SAT problem is NP-complete, and I can convert a logical formula into an algebraic expression. Once I have the algebraic expression I can say how many ways it can be satisfied, and I have an algorithm that produces all the solutions using only subtraction. So my question is: is solving a Boolean polynomial NP-complete? Or does the 'difficult' part lie in the conversion from logic to algebra?
Thanks in advance, Ben
Apply these substitution rules to any logical expression recursively and you get a corresponding algebraic expression. The number of possible normalised algebraic expressions in $v$ variables (over $\mathbb{Z}_2$) is equal to the number of functions from ${\mathbb{Z}_2}^v$ to $\mathbb{Z}_2$, so the correspondence between normalised expressions and Boolean functions is a bijection. Therefore a formula is unsatisfiable if and only if its normalised algebraic expression (one on which no further substitutions can be made) is $0$. So, contrary to intuition, the difficult part is bringing the logical formula to a normalised algebraic form.
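The recursive substitution can be sketched in a few lines of Python (my own representation, not from the answer): a multilinear polynomial over $\mathbb{Z}$ is a dict mapping each monomial, stored as a frozenset of variable names, to its integer coefficient. Multiplying monomials by set union applies $x^2 = x$ automatically:

```python
# A multilinear polynomial over Z: {frozenset of variable names: int coefficient}
ONE = {frozenset(): 1}

def add(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c != 0}

def scale(p, k):
    return {m: k * c for m, c in p.items() if k * c != 0}

def mul(p, q):
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 | m2            # set union: the x^2 = x normalisation
            r[m] = r.get(m, 0) + c1 * c2
    return {m: c for m, c in r.items() if c != 0}

def var(x):
    return {frozenset([x]): 1}

def NOT(p):        return add(ONE, scale(p, -1))                 # 1 - p
def AND(p, q):     return mul(p, q)                              # pq
def OR(p, q):      return add(add(p, q), scale(mul(p, q), -1))   # p + q - pq
def IMPLIES(p, q): return NOT(mul(p, NOT(q)))                    # 1 - p(1 - q)

a, b, c, d = map(var, "abcd")
result = IMPLIES(AND(IMPLIES(a, b), IMPLIES(c, d)), OR(a, b))
for mono, coeff in sorted(result.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print("".join(sorted(mono)) or "1", coeff)
```

Running this on the example formula reproduces $a+b+c-ab-ac-bc-cd+abc+acd+bcd-abcd$.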
$\left( \left(a \rightarrow b\right) \wedge \left(c \rightarrow d\right) \right) \rightarrow \left(a \vee b \right)\implies (ab -a + 1)(cd - c + 1) \rightarrow (a + b - ab)\implies $ $(abcd - abc + ab - acd + ac + cd - c - a + 1) \rightarrow (a + b - ab) \implies $ $(abcd - abc + ab - acd + ac + cd - c - a + 1) (a + b - ab - 1) + 1 \implies$ $-a^2 b^2 c d + a^2 b^2 c - a^2 b^2 + 2 a^2 b c d - 2 a^2 b c + 2 a^2 b - a^2 c d + a^2 c - a^2 + a b^2 c d - a b^2 c + a b^2 - 3 a b c d + 3 a b c - 3 a b + 2 a c d - 2 a c + 2 a + b c d - b c + b - c d + c$
Note that it would be far easier to work in $\mathbb{Z}_2$, as the even coefficients would disappear and every minus would become a plus. Applying rule 5 (the idempotency rule $x^2 = x$, valid since each variable takes values in $\{0,1\}$) to the last expression we have:
$-abcd + abc - ab + 2abcd - 2 abc + 2ab - acd + ac - a + abcd - abc + a b - 3 abcd + 3 abc - 3 ab + 2 acd - 2 ac + 2 a + b c d - b c + b - c d + c \implies$ $a+b+c-ab-ac-bc-cd+abc+acd+bcd-abcd$
It can be proven that said expression is unsatisfiable if and only if it is identically $0$: by the counting argument, over $\mathbb{Z}_2$ a normalised formula is unsatisfiable if and only if its $\mathbb{Z}_2$ reduction (replacing all the coefficients by their mod-$2$ remainders) is $0$. It remains to check that nothing is lost in passing from $\mathbb{Z}$ to $\mathbb{Z}_2$: it can be proven inductively that the only normalised polynomial in $v$ variables over $\mathbb{Z}$ which gives $0$ for all possible values in $\{0,1\}^{v}$ is the zero polynomial.
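For completeness, the inductive step of that last fact can be sketched as follows (a standard argument, not spelled out above). Split a normalised (multilinear) polynomial $p$ along its last variable:
$$p(x_1,\dots,x_v) = q(x_1,\dots,x_{v-1}) + x_v\, r(x_1,\dots,x_{v-1}),$$
with $q$ and $r$ multilinear. Setting $x_v = 0$ shows that $q$ vanishes on all of $\{0,1\}^{v-1}$, so $q = 0$ by the induction hypothesis; setting $x_v = 1$ then shows that $r$ vanishes as well, hence $p = 0$.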
So the difficult part is computing the normalised algebraic formula (it can have up to $2^{v}$ terms, after all), not checking whether it is satisfiable.
Edit: it's true that you can compute a polynomial for an expression quickly if you avoid doing any exponent simplifications (rule 5) or grouping of identical terms; I suspect most of the difficulty lies there. Also, the $\mathbb{Z}_2$-reduction of said polynomial is called the Zhegalkin polynomial.
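For illustration, the Zhegalkin polynomial can also be computed straight from the truth table via the binary Möbius transform (the coefficient of a monomial is the XOR of the function over all points below it). A brute-force Python sketch, with names of my own choosing; it is $O(4^n)$ and only meant for small $n$:

```python
from itertools import product

def zhegalkin(f, n):
    """Zhegalkin (algebraic normal form) coefficients of a 0/1 function f
    of n variables, via the binary Moebius transform of its truth table."""
    coeffs = {}
    for mono in product([0, 1], repeat=n):
        c = 0
        for pt in product([0, 1], repeat=n):
            # XOR f over all points coordinate-wise below the monomial
            if all(p <= m for p, m in zip(pt, mono)):
                c ^= f(*pt)
        if c:
            coeffs[mono] = 1
    return coeffs

def formula(a, b, c, d):
    # ((a -> b) and (c -> d)) -> (a or b), on 0/1 values
    imp = lambda x, y: 1 - x * (1 - y)
    return imp(imp(a, b) * imp(c, d), max(a, b))

anf = zhegalkin(formula, 4)
names = "abcd"
for mono in sorted(anf, key=lambda m: (sum(m), m)):
    print("".join(n for n, bit in zip(names, mono) if bit) or "1")
```

For the example this prints the eleven monomials $a, b, c, ab, ac, bc, cd, abc, acd, bcd, abcd$, i.e. the mod-$2$ reduction of the polynomial above (all its coefficients are $\pm 1$, so every term survives).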
P vs NP: I've been doing some examples. The simplification part is not even needed to see the difficulty. It is reasonable to assume that multiplying a sum of $p$ terms by a sum of $k$ terms takes $O(pk)$ time. Now consider the expression $(1-a_1)(1-a_2)\cdots(1-a_n)$; it is reasonable to ascribe to it length $2n$. Expanding it by multiplying recursively takes around $O(2^n)$ time, since the result has $2^n$ terms. Whereas verifying that $\{a_{1} = 0, a_2 = 0, \dots, a_n = 0\}$ satisfies it (gives $1$) only takes $O(n)$ time: we compute the values of the sub-expressions before proceeding to multiply, $\{(1-a_1)\rightarrow 1, (1-a_2)\rightarrow 1,\dots,(1-a_n)\rightarrow 1\}$, which is something we can't do in the symbolic case, where we don't know the values of the variables.
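The gap can be made concrete with a small sketch (my own representation: a monomial is the frozenset of its variable indices). Symbolic expansion of $(1-a_1)\cdots(1-a_n)$ produces all $2^n$ subsets of indices as monomials, while numeric evaluation at a given assignment is a single $O(n)$ pass:

```python
def expand_product(n):
    """Symbolically expand (1-a_1)...(1-a_n): {frozenset of indices: coeff}."""
    poly = {frozenset(): 1}
    for i in range(1, n + 1):
        factor = {frozenset(): 1, frozenset([i]): -1}   # the factor 1 - a_i
        new = {}
        for m1, c1 in poly.items():
            for m2, c2 in factor.items():
                m = m1 | m2
                new[m] = new.get(m, 0) + c1 * c2
        poly = new
    return poly

def evaluate_product(n, assignment):
    """Numerically evaluate the product at a 0/1 assignment: O(n)."""
    value = 1
    for i in range(1, n + 1):
        value *= 1 - assignment[i]
    return value

for n in (4, 8, 12):
    print(n, len(expand_product(n)))   # 2**n terms: 16, 256, 4096

n = 12
print(evaluate_product(n, {i: 0 for i in range(1, n + 1)}))   # all a_i = 0 gives 1
```

Each subset of indices arises exactly once during the expansion, with coefficient $(-1)^{|S|}$, so no cancellation ever shrinks the term count below $2^n$.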
Now, you might ask yourself: "Wait a minute. We don't need to compute the whole expanded expression. We just need to check if it's equal to $0$ or not. And just by looking at $(1-a_1)(1-a_2)...(1-a_n)$, by synthetic reasoning, we see that it's not equal to $0$. We can be clever".
Fair enough. But the question is: can you write an algorithm, clever enough, that always takes polynomial time to check whether an arbitrary expression is equal to $0$? Then you've proved P=NP. On the other hand, suppose this isn't so. Can you prove that no possible algorithm can be clever enough to check an arbitrary expression in polynomial time? If so, then you've proved P≠NP. Either way, a very nice thing to have. We haven't been able to find polynomial-time algorithms to check whether an expression is equal to $0$, but we have no formal proof that such algorithms are impossible to find. So far, P vs NP remains in the realm of conjecture.