This question comes from Mehryar Mohri's Foundations of Machine Learning. In Example 2.5 the book transforms a $k$-CNF formula into a conjunction of Boolean literals, but I can't understand the trick used in the book.
Here is the definition of $k$-CNF formula:
A conjunctive normal form (CNF) formula is a conjunction of disjunctions. A $k$-CNF formula is an expression of the form $T_1\land \ldots \land T_j$ with arbitrary length $j\in\mathbb{N}$ and with each term $T_i$ being a disjunction of at most $k$ Boolean attributes.
and here is the trick:
To do so, it suffices to associate a new variable to each $T_i$. This can be done with the following bijection: $$ a_i(x_1)\lor\ldots\lor a_i(x_n)\to Y_{a_i(x_1),\ldots,a_i(x_n)} $$ where $a_i(x_j)$ denotes the assignment to $x_j$ in term $T_i$.
But how can this mapping be a bijection when $Y_{a_i(x_1),\ldots,a_i(x_n)}$ only takes the values 0 or 1? It would be great if someone could give a detailed example showing how this works. Thanks.
The notation is not entirely clear to me, but I think that, basically, each conjunct $T_i$ in the $k$-CNF formula is a disjunction of literals, i.e. Boolean variables or negated Boolean variables.
I assume that the $x_j$ are those Boolean variables; if so, each one occurs in $T_i$ either as-is or negated (or not at all).
We introduce an "auxiliary" function $a_i$ associated to $T_i$ that records, for each variable $x_j$, how it appears in $T_i$: positively, negated, or not at all. For instance, if $n = 3$ and $T_i = x_1 \lor \lnot x_3$, the index tuple $(a_i(x_1), a_i(x_2), a_i(x_3))$ records "positive, absent, negated".
If so, the new Boolean variable $Y_{a_i(x_1), \ldots, a_i(x_n)}$ will behave exactly like the original disjunction. The key point is that the bijection is between *terms* and *variable names*: distinct terms produce distinct index tuples $(a_i(x_1),\ldots,a_i(x_n))$, hence distinct variables $Y_{a_i(x_1),\ldots,a_i(x_n)}$, and vice versa. The 0/1 *value* of $Y$ is not part of the bijection; on any input, $Y$ simply takes the truth value of the term it names, so the $k$-CNF formula becomes a conjunction of these new variables.
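A small Python sketch (my own illustration, not code from the book) may make this concrete. It enumerates every possible term (disjunction of at most $k$ literals over $n$ variables) — each term serving as the "name" of one new variable $Y$ — and shows that evaluating the $k$-CNF formula directly agrees with evaluating the conjunction of the corresponding $Y$ variables. The representation of a term as a tuple of (index, sign) pairs is my own choice here:

```python
from itertools import combinations, product

def eval_term(term, x):
    # term: tuple of (index, positive?) pairs, e.g. ((0, True), (1, False))
    # represents the disjunction x_0 OR (NOT x_1); evaluate it on input x.
    return any(x[i] if pos else not x[i] for i, pos in term)

def all_terms(n, k):
    # Every disjunction of at most k literals over n variables.
    # Each such term corresponds to exactly one new variable Y (the bijection).
    terms = []
    for size in range(1, k + 1):
        for idxs in combinations(range(n), size):
            for signs in product([True, False], repeat=size):
                terms.append(tuple(zip(idxs, signs)))
    return terms

def transform(x, terms):
    # Map an input x in {0,1}^n to the values of all the new Y variables:
    # Y_T is simply the truth value of term T on x.
    return {t: eval_term(t, x) for t in terms}

n, k = 3, 2
terms = all_terms(n, k)

# Example k-CNF formula: (x_0 OR NOT x_1) AND (x_2), evaluated on x = (1, 1, 1).
t1 = ((0, True), (1, False))   # x_0 OR NOT x_1
t2 = ((2, True),)              # x_2
x = (1, 1, 1)

Y = transform(x, terms)
cnf_value = eval_term(t1, x) and eval_term(t2, x)   # evaluate formula directly
conj_value = Y[t1] and Y[t2]                        # conjunction of Y variables
assert cnf_value == conj_value
```

Note that the number of new variables is fixed by $n$ and $k$ alone (here, $6 + 12 = 18$ terms for $n=3$, $k=2$), which is what makes learning a $k$-CNF formula reducible to learning a conjunction over this larger variable set.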