Let $L$ be a first order language and let $S$ be a set of $L$-sentences. I have seen two definitions of what it means for $S$ to be complete:
- $S$ is complete iff for every $L$-sentence $A$, either $A\in S$ or $\neg A\in S$ (e.g. in Peter G. Hinman's Fundamentals of Mathematical Logic)
- $S$ is complete iff for every $L$-sentence $A$, either $S\vDash A$ or $S\vDash\neg A$ (e.g. in David Marker's Model Theory: An Introduction).
Notion 1 seems to be more common, and it implies notion 2. I have two questions:
Is there a standard reason why 1 appears to be more common?
When would we want to work with a set of sentences in some language that satisfies 2 but not 1? Is there an example you may want to mention as particularly interesting?
Edit: I can think of the following reason why we may prefer one over the other.
Lemma: A satisfiable, consistent set $S$ of $L$-sentences is complete iff any two structures satisfying $S$ are elementarily equivalent.
This lemma, as stated, is true if "complete" is defined as in 2. However, if "complete" is defined as in 1, then $S$ additionally needs to be closed under $\vDash$, if I am not mistaken.
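A sketch of why one direction of the lemma survives under definition 1 (my own phrasing, not taken from either book): for any two structures $\mathcal{M}, \mathcal{N} \vDash S$ and any sentence $A$,

$$A \in S \;\Rightarrow\; \mathcal{M} \vDash A \text{ and } \mathcal{N} \vDash A, \qquad \neg A \in S \;\Rightarrow\; \mathcal{M} \vDash \neg A \text{ and } \mathcal{N} \vDash \neg A,$$

and since definition 1 guarantees that one of the two cases holds for every $A$, we get $\mathcal{M} \equiv \mathcal{N}$. It is the converse direction that can fail when $S$ is not closed under $\vDash$.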
(I would add the requirement that $S$ is consistent.)
The reason 2) is more common is that it does not require $S$ to be closed under logical consequence. E.g. the set of sentences that axiomatize the dense linear orders without endpoints is a complete theory with respect to 2) but not with respect to 1).
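To make that example concrete (a sketch; the exact axiom list varies by author): writing $S_{\mathrm{DLO}}$ for the usual finite axiomatization of dense linear orders without endpoints, we have for instance

$$S_{\mathrm{DLO}} \vDash \exists x\, \exists y\, (x < y), \qquad \exists x\, \exists y\, (x < y) \notin S_{\mathrm{DLO}},$$

so 1) fails, while 2) holds: all models of $S_{\mathrm{DLO}}$ are elementarily equivalent (e.g. because any two countable models are isomorphic by Cantor's back-and-forth argument, combined with the Löwenheim–Skolem theorems).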