I am working through A Friendly Introduction to Mathematical Logic and am a little confused about a discrepancy between the book's proof of the completeness theorem for first-order logic and other proofs I've seen.
The textbook's proof uses Henkin constants and Henkin axioms. Given a language $\mathcal{L}$, we add a countable set of Henkin constants $c_i$. Then we enumerate every sentence of the form $\exists x\, \theta$ and add the corresponding Henkin axioms to our (by assumption) consistent set of sentences $\Sigma$. This is where every other proof I've seen stops expanding the language and $\Sigma$; they then go on to build a maximally consistent set of sentences, etc. The textbook, however, keeps expanding the language, adding set after set of Henkin constants and Henkin axioms. For example, the first expansion of $\Sigma$ looks as follows:
$$H_1 = \{\, \exists x\, \theta_i \rightarrow \theta_i(c_i) \mid (\exists x\, \theta_i) \text{ is an } \mathcal{L} \text{ sentence} \,\}$$
The second expansion, where $\mathcal{L}_1 = \mathcal{L} \cup \{c_i\}$ and we have just added a new set of Henkin constants $\{k_i\}$, would be:
$$H_2 = \{\, \exists x\, \theta_i \rightarrow \theta_i(k_i) \mid (\exists x\, \theta_i) \text{ is an } \mathcal{L}_1 \text{ sentence} \,\}$$
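To check my understanding of why one stage isn't enough, here is a small toy sketch of the iteration (the tuple encoding of formulas, the function names, and the constant-naming scheme are all my own, not the book's). The point it illustrates: a stage-one Henkin axiom can have a consequent that is itself a *new* existential $\mathcal{L}_1$ sentence mentioning a fresh constant, so stage two must give that sentence its own witness.

```python
from itertools import count

# Toy encoding (my own, not the book's): an existential sentence is
# ("exists", var, body); an implication is ("->", a, b); atoms are
# tuples like ("P", "x", "y").

def subst(t, var, const):
    """Replace the variable `var` by the constant `const` in a formula tree."""
    if t == var:
        return const
    if isinstance(t, tuple):
        return tuple(subst(s, var, const) for s in t)
    return t

def henkin_stage(sentences, prefix):
    """One expansion stage: for every sentence of the form (exists x theta),
    emit the Henkin axiom (exists x theta) -> theta(c) with a fresh constant."""
    counter = count()
    axioms = []
    for s in sentences:
        if isinstance(s, tuple) and s[0] == "exists":
            c = f"{prefix}{next(counter)}"
            axioms.append(("->", s, subst(s[2], s[1], c)))
    return axioms

# Sigma contains one sentence with nested existentials: exists x exists y P(x, y).
sigma = [("exists", "x", ("exists", "y", ("P", "x", "y")))]

# Stage 1 over the L sentences: witness the outer quantifier with c0.
h1 = henkin_stage(sigma, "c")

# The consequent of h1[0] is the new L_1 sentence exists y P(c0, y),
# which did not exist before stage 1 -- so stage 2 must witness it with k0.
h2 = henkin_stage([h1[0][2]], "k")

print(h1)  # [('->', ('exists', 'x', ...), ('exists', 'y', ('P', 'c0', 'y')))]
print(h2)  # [('->', ('exists', 'y', ('P', 'c0', 'y')), ('P', 'c0', 'k0'))]
```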
This process continues infinitely. After some thought, I felt this made intuitive sense: I figured each expansion of $\mathcal{L}$ and $\Sigma$ lets us witness nested existential quantifiers (and thereby handle $n$-ary relations in our model). Now, however, I'm not sure I was on the right track. Any help would be much appreciated – thanks!