So I'm reading Leary's *A Friendly Introduction to Mathematical Logic* (both the first and second editions follow this approach), and the way he builds the term model is slightly different from what I've seen elsewhere. He proceeds as follows:
- If $\Sigma$ is a consistent set of $\mathcal{L}$-sentences, create $\mathcal{L}_1$ by adding countably infinitely many new Henkin constants $c_i$.
- Extend $\Sigma$ to $\Sigma_1$ by setting $\Sigma_1 = \Sigma \cup \left\{ \left[ \exists x \theta \right] \implies \theta^x_{c} \mid \left[ \exists x \theta \right] \text{ is an $\mathcal{L}$-sentence} \right\}$, where $c$ is the Henkin constant assigned to the sentence $\left[ \exists x \theta \right]$ and $\theta^x_c$ is $\theta$ with $c$ substituted for $x$.
- Expand the language $\mathcal{L}_1$ again by adding new Henkin constants $k_i$, and extend $\Sigma_1$ to $\Sigma_2$ in the same fashion as in step 2, now for $\mathcal{L}_1$-sentences. Continue ad infinitum.
- Set the new set of sentences as $\hat{\Sigma} = \bigcup_{i<\infty} \Sigma_i$.
He then proceeds as usual, constructing the term model from the equivalence classes induced by the sentences $(t_1 = t_2) \in \hat{\Sigma}$, and shows that it is indeed a model of $\Sigma$. Is this necessary, or does it perhaps have some added benefit? Is it not enough to stop at the $\Sigma_1$-sentences?
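To make the bookkeeping concrete, here is a toy sketch of the level discipline (the names `fresh_constants`, `henkin_axiom`, etc. are mine, not Leary's, and formulas are just opaque strings). The only point it illustrates is that a constant introduced at level $n$ witnesses only sentences from level $n-1$, so it can never occur inside the sentence it witnesses:

```python
# Toy sketch of the level-by-level Henkin construction.
# Formulas are opaque strings; all that matters is which constants they mention.

def fresh_constants(level, count):
    """Constants introduced at a given level, e.g. 'k_0^1' for level 1."""
    return [f"k_{i}^{level}" for i in range(count)]

def henkin_axiom(exists_sentence, constant):
    """Schematic form of (exists x theta) -> theta[x := constant]."""
    return f"({exists_sentence}) -> subst({exists_sentence}, x, {constant})"

# Level 0: some existential sentences of the base language L.
sigma = ["exists x P(x)", "exists x Q(x)"]

# Level 1: one fresh constant and one Henkin axiom per level-0 sentence.
consts_1 = fresh_constants(1, len(sigma))
sigma_1 = sigma + [henkin_axiom(s, c) for s, c in zip(sigma, consts_1)]

# The level discipline guarantees freshness automatically:
for s, c in zip(sigma, consts_1):
    assert c not in s
```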
There is a slight issue with trying to do it all at once - you have to be careful how you index your constants.
Keeping everything countable for simplicity, we're looking at the language $\mathcal{L}_1$ gotten from $\mathcal{L}$ by adding infinitely many new constant symbols $c_i$ ($i\in\mathbb{N}$). We now fix some indexing $\theta_i(x)$ of $\mathcal{L}_1$-formulas with one free variable, and add to $T$ the axiom $$\varphi_i: \exists x\theta_i(x)\iff\theta_i(c_i)$$ for each $i\in\mathbb{N}$.
But this need not be consistent: suppose for example that $\theta_i(x)$ is the formula $x\not=c_i$. This could totally happen, based only on what we've said so far!
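Spelling the clash out (under the extra assumption, for illustration, that $T$ proves there are at least two elements):

```latex
% With \theta_i(x) taken to be x \neq c_i, the axiom \varphi_i becomes
\varphi_i:\quad \exists x\,(x\neq c_i)\iff (c_i\neq c_i).
% The right-hand side contradicts reflexivity of equality, so \varphi_i yields
\neg\exists x\,(x\neq c_i),\quad\text{i.e.}\quad \forall x\,(x=c_i).
% Hence if T proves \exists x\,\exists y\,(x\neq y), then T\cup\{\varphi_i\}
% is inconsistent, even though T itself was consistent.
```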
This is fixed by being a bit careful about how we index things - and this is exactly the trick Enderton employs (per spaceisdarkgreen's comment "Enderton does a trick to do it all at once, involving adding a Henkin sentence for each formula-variable pair in an orderly fashion where each constant hasn’t appeared yet"). Meanwhile the "many-step" approach kills the issue entirely since the constants introduced at level $n$ are linked only to formulas introduced at previous levels. And thinking about this you can see how Enderton's approach really is a refinement of the multi-step one: for Enderton, each "level" is just one formula-constant pair, and his fine-grained bookkeeping corresponds to the coarse bookkeeping of the usual multi-step levels.
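Here is a toy reconstruction of that one-pass bookkeeping (my own sketch, not Enderton's exact text): enumerate the formulas, and for each one pick the first Henkin constant that has not yet appeared in any formula or axiom seen so far, which rules out the bad self-reference above.

```python
import re

def henkin_constants_in(formula):
    """Henkin constants c0, c1, ... mentioned in a formula string."""
    return set(re.findall(r"c\d+", formula))

def assign_witnesses(formulas):
    """For each formula theta(x), pick a constant fresh for everything seen
    so far and emit the Henkin axiom (exists x theta) -> theta[x := c]."""
    used = set()
    axioms = []
    for theta in formulas:
        used |= henkin_constants_in(theta)
        k = 0                      # first constant index not yet used anywhere
        while f"c{k}" in used:
            k += 1
        c = f"c{k}"
        used.add(c)
        # crude textual substitution of the free variable x (fine for this toy)
        axioms.append(f"(exists x {theta}) -> {theta.replace('x', c)}")
    return axioms

# The problematic formula "x != c0" now gets the witness c1, not c0:
axioms = assign_witnesses(["P(x)", "x != c0", "Q(x, c1)"])
```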
Which is preferable? Well, that's going to be a matter of taste. I certainly don't know of any mathematical issue for which one is better than the other. To me the main distinction is that Enderton's approach has an undeniable parsimony to it while the multi-step approach is easier (in my experience) to learn.