For each relation, the sentence is either true or false.
Is there a taxonomy of sentences in this regard? Many seem to converge to 0 or 1, are there some that converge to values in between?
Your question is answered by the zero-one law for first-order logic in a finite relational language. I think the best reference for this is A Shorter Model Theory by Hodges (Theorem 6.4.6).
Let $L$ be a finite relational language, and write $\text{Str}_L(X)$ for the set of all $L$-structures with underlying set $X$. For a natural number $n$, let $[n] = \{0,\dots,n-1\}$. Then for any $L$-sentence $\varphi$, we can define the asymptotic probability $P(\varphi)$ that $\varphi$ is satisfied: $$P(\varphi) = \lim_{n\to\infty} \frac{|\{A\in \text{Str}_L([n])\mid A\models \varphi\}|}{|\text{Str}_L([n])|}.$$ This is the limit as $n$ goes to $\infty$ of the proportion of $L$-structures with a fixed domain of size $n$ which satisfy $\varphi$.
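To make the definition concrete, here is a small illustrative script (my own sketch, not from Hodges) that estimates this proportion by sampling uniformly random structures in the language of a single binary relation $R$: each pair $(i,j)$ is put in $R$ independently with probability $1/2$, which is exactly the uniform distribution on $\text{Str}_L([n])$. The function names `estimate_p` and the two sample sentences are just for illustration.

```python
import random

def random_binary_relation(n):
    """Uniformly random binary relation on [n]: each pair (i, j)
    belongs to R independently with probability 1/2."""
    return {(i, j) for i in range(n) for j in range(n) if random.random() < 0.5}

def estimate_p(phi, n, trials=2000):
    """Monte Carlo estimate of the fraction of L-structures on [n] satisfying phi."""
    return sum(phi(random_binary_relation(n), n) for _ in range(trials)) / trials

# phi1: "some element is R-related to itself" -- asymptotic probability 1
phi1 = lambda R, n: any((x, x) in R for x in range(n))
# phi2: "every element is R-related to itself" -- asymptotic probability 0
phi2 = lambda R, n: all((x, x) in R for x in range(n))

for n in (5, 10, 20):
    print(n, estimate_p(phi1, n), estimate_p(phi2, n))
```

Already for moderate $n$ the two estimates are very close to $1$ and $0$: the probability that $\varphi_1$ fails on a random structure of size $n$ is $2^{-n}$.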
Theorem: For any $L$-sentence $\varphi$, $P(\varphi) = 0 \text{ or }1$.
Even more remarkably, $P(\varphi) = 1$ if and only if $M_L\models \varphi$, where $M_L$ is the Fraïssé limit of the class of all finite $L$-structures (this is a particular countably infinite $L$-structure). That is, $P(\varphi) = 1$ if and only if $\varphi\in T_L = \text{Th}(M_L)$.
Now we can give an explicit (computable) axiomatization for $T_L$ (again, see Hodges Section 6.4), and a computably axiomatizable complete theory is decidable: there is an algorithm which determines, for every $L$-sentence $\varphi$, whether $P(\varphi) = 0\text{ or }1$. The simplest form of this algorithm just searches for proofs of $\varphi$ and $\lnot \varphi$ from the axioms of $T_L$. A quantifier elimination algorithm is probably a bit more efficient.
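The axioms in question are the extension axioms. For the language of graphs, the simplest ones say: for all disjoint finite sets of vertices $A$ and $B$, some vertex outside $A\cup B$ is adjacent to everything in $A$ and nothing in $B$. The following sketch (my own, under the assumption of the Erdős–Rényi $G(n,1/2)$ model, which is the uniform distribution on graphs with vertex set $[n]$) checks one of these axioms on a random graph; for moderate $n$ it holds with high probability, matching $P = 1$.

```python
import random
from itertools import combinations

def random_graph(n, p=0.5):
    """Uniformly random graph on [n]: each edge present independently with prob 1/2."""
    return {frozenset(e) for e in combinations(range(n), 2) if random.random() < p}

def satisfies_extension(E, n, k):
    """Check the extension axioms for all disjoint A, B with |A| + |B| = k:
    some vertex outside A and B is adjacent to all of A and none of B."""
    for S in combinations(range(n), k):
        for mask in range(2 ** k):
            A = {S[i] for i in range(k) if mask >> i & 1}
            B = set(S) - A
            if not any(all(frozenset((v, a)) in E for a in A)
                       and not any(frozenset((v, b)) in E for b in B)
                       for v in range(n) if v not in S):
                return False
    return True

print(satisfies_extension(random_graph(80), 80, 2))
```

A fixed candidate vertex works for a given $(A, B)$ with probability $2^{-k}$, so the chance that all $n - k$ candidates fail is $(1 - 2^{-k})^{n-k}$, which goes to $0$ exponentially fast; a union bound over the finitely many $(A, B)$ then gives $P(\text{axiom}) = 1$.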
It's important to note that none of the above holds for languages with function symbols.
In the comments, you asked about replacing function symbols with relation symbols naming their graphs.

It's true that we can replace function symbols by relation symbols naming their graphs and rewrite first-order sentences in a corresponding way. The problem with this translation is that it does not preserve the count in the definition of $P(\varphi)$. That is, if $L = \{f\}$, where $f$ is a binary function symbol, and $L' = \{R\}$ is the corresponding relational language, where $R$ is a ternary relation symbol, then $\text{Str}_{L'}$ also includes $L'$-structures in which $R$ is not the graph of a function - it's way bigger than $\text{Str}_{L}$.
And if we let $\varphi$ be the sentence $\forall x\forall y\exists! z\, R(x,y,z)$ asserting that $R$ is the graph of a binary function $f(x,y) = z$, then we will have $P(\varphi) = 0$. That is, in almost all $L'$-structures, $R$ is not the graph of a binary function. So you can think of imposing the restriction that $R$ is the graph of a function as conditioning on a measure $0$ set.
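The counting here is elementary enough to check exactly: there are $n^{n^2}$ binary functions on $[n]$ but $2^{n^3}$ ternary relations, so the fraction that are functional collapses immediately. A quick sketch (function name `fraction_functional` is mine):

```python
from fractions import Fraction

def fraction_functional(n):
    """Exact fraction of ternary relations on [n] that are graphs of a
    binary function: n^(n^2) functions out of 2^(n^3) relations."""
    return Fraction(n ** (n * n), 2 ** (n ** 3))

for n in (1, 2, 3):
    print(n, fraction_functional(n), float(fraction_functional(n)))
```

Already at $n = 3$ the fraction is $3^9 / 2^{27} \approx 1.5 \times 10^{-4}$, and it vanishes super-exponentially from there.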
For an explicit example of the failure of the zero-one law outside the relational context, you should think about Exercise 8 from Section 6.4 in Hodges. Let $L = \{f\}$, where $f$ is a unary function symbol, and let $\varphi$ be the sentence $\forall x\, f(x)\neq x$. A uniformly random structure is just a uniformly random function $f\colon [n]\to [n]$, and the fraction with no fixed point is $\left(\frac{n-1}{n}\right)^n$, so $$P(\varphi) = \lim_{n\to\infty}\left(1 - \frac{1}{n}\right)^n = \frac{1}{e}.$$
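This one is easy to verify numerically. A short sketch (function names are mine) compares the exact fraction $\left(1 - \frac{1}{n}\right)^n$ against a Monte Carlo estimate over uniformly random functions:

```python
import math
import random

def p_no_fixed_point(n):
    """Exact fraction of functions f: [n] -> [n] with f(x) != x for all x."""
    return ((n - 1) / n) ** n

def estimate(n, trials=100_000):
    """Monte Carlo: sample f(x) uniformly for each x and count
    the functions without a fixed point."""
    hits = sum(all(random.randrange(n) != x for x in range(n)) for _ in range(trials))
    return hits / trials

print(p_no_fixed_point(1000), estimate(50), 1 / math.e)
```

Both numbers settle near $1/e \approx 0.3679$, strictly between $0$ and $1$, so the zero-one law genuinely fails here.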