Possibly false theorem in a book. Quine's theorem proof. A.k.a. Blake's method for reducing Boolean functions.

This is what I found in one text on discrete mathematics:

Theorem: given the full DNF of a Boolean function, if we first simplify it by exhaustively applying all identities of type 1, $(x\land y)\lor (\bar{x}\land y) = y$ and $(x\lor y)\land (\bar{x}\lor y) = y$, and then apply all absorptions $(x\land y)\lor y = y$ and $(x\lor y)\land y = y$, we obtain a reduced DNF, namely the disjunction of all of the function's prime implicants.
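For concreteness, here is a small Python sketch of the procedure the theorem describes; the encoding of a term as a frozenset of (variable, polarity) literals is my own choice, not from the text:

```python
from itertools import combinations

def merge(t1, t2):
    """Type-1 identity: if t1 and t2 differ only in the polarity of
    one variable, return their common part; otherwise None."""
    diff = t1 ^ t2  # symmetric difference of the literal sets
    if len(diff) == 2:
        (v1, p1), (v2, p2) = diff
        if v1 == v2 and p1 != p2:
            return t1 & t2
    return None

def reduce_dnf(terms):
    """Exhaustively apply type-1 merges (keeping all terms), then
    apply absorption to drop subsumed terms."""
    terms = set(terms)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(terms, 2):
            m = merge(a, b)
            if m is not None and m not in terms:
                terms.add(m)
                changed = True
    # absorption: a term properly containing another term is absorbed
    return {t for t in terms if not any(s < t for s in terms)}
```

For example, starting from the full DNF of $y$ over $\{x, y\}$, namely $(x\land y)\lor(\bar{x}\land y)$, the procedure returns the single term $y$.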

The text contained a "proof", but it was far too hand-wavy and, in fact, similar to what I'll describe below.

Here is my intuition for the proof: if we multiply a prime implicant by an expression of the form $(x\lor \bar{x})$ for each variable it does not mention, we can turn it into a disjunction of minterms. This way we can obtain a full DNF. Then, by applying to our full DNF the type-1 operations, which are the inverses of multiplying by $(x\lor \bar{x})$, we should recover all the prime implicants. But some additional unwanted terms will also appear, and the absorptions should take care of those.
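The expansion step in this intuition can be sketched as follows (again with a term encoded, by my own choice, as a set of (variable, polarity) literals):

```python
def expand(term, variables):
    """Multiply a term by (v OR not-v) for every variable v it does
    not mention, yielding the set of minterms the term covers."""
    minterms = {frozenset(term)}
    for v in variables:
        if not any(u == v for (u, _) in term):
            minterms = {m | {(v, p)} for m in minterms
                        for p in (True, False)}
    return minterms
```

For instance, expanding the term $y$ over the variables $\{x, y\}$ yields the two minterms $x\land y$ and $\bar{x}\land y$.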

My question is: how can I justify all of the above steps and turn this into a complete, rigorous proof?

Actually, I don't even think the theorem as stated above is true at all. We probably need a stronger identity for the first step, namely the consensus identity $$(a\land c)\lor (b\land \bar{c})=(a\land c)\lor (b\land \bar{c})\lor (a\land b),$$ instead of the type-1 identities described in the theorem.
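For experimenting with this alternative, the consensus-based procedure (Blake's method, as in the title) can be sketched like this; the frozenset-of-(variable, polarity)-literals encoding is again my own assumption:

```python
from itertools import combinations

def consensus(t1, t2):
    """If exactly one variable occurs positively in one term and
    negatively in the other, return the combined remaining literals
    (the consensus identity); otherwise None."""
    opp = [(v, p) for (v, p) in t1 if (v, not p) in t2]
    if len(opp) != 1:
        return None
    v, p = opp[0]
    return (t1 - {(v, p)}) | (t2 - {(v, not p)})

def blake(terms):
    """Close a DNF under consensus, then remove absorbed terms."""
    terms = set(terms)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(terms, 2):
            c = consensus(a, b)
            if c is not None and c not in terms:
                terms.add(c)
                changed = True
    # absorption: drop any term properly containing another
    return {t for t in terms if not any(s < t for s in terms)}
```

For example, on the (non-full) DNF $(x\land y)\lor(\bar{x}\land z)$ this produces the extra prime implicant $y\land z$, which no type-1 merge alone can reach from those two terms.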