How can I understand the (no) independence property in this very simple setting of first order logic?


My understanding of logic is limited to first-order logic without function symbols, with a finite set of domain constants, and with Herbrand semantics. In this setting, I would like to understand the independence property of logical theories --- my goal is to understand VC dimension in this setting.

So let us say I have an FOL language whose only predicate is a binary relation $R$, with $n$ domain constants, represented by $[n] = \{1,\dots,n\}$. Does this language have the independence property?

In my understanding of the independence property thus far, this language will have the independence property if we can write a formula that picks out any subset of the directed graphs on $[n]$. Is this correct?
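For reference, here is the definition I am working from (a sketch as I understand it, in case I have it wrong): a formula $\varphi(x;y)$ has the independence property in a structure $M$ if for every $m$ there are tuples $a_1,\dots,a_m$ and, for every $S \subseteq [m]$, a tuple $b_S$ such that

$$M \models \varphi(a_i; b_S) \iff i \in S.$$

So for $\varphi(x;y) = R(x,y)$, this asks whether the family of sets $\{a : M \models R(a,b)\}$, as $b$ ranges over the domain, shatters arbitrarily large finite sets.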

Furthermore, if I do not put any restrictions on formulas, then I can always axiomatize any directed graph on $[n]$ in this language, simply by writing out its edges (and non-edges) as a conjunction of literals.
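As a toy example of what I mean (my own construction, so please correct me if this is not the right notion of "axiomatize"): the graph on $[3]$ with edge set $E = \{(1,2),(2,3)\}$ is pinned down, up to Herbrand semantics, by the single sentence

$$\varphi_G \;=\; R(1,2) \wedge R(2,3) \wedge \bigwedge_{(i,j)\,\in\,[3]^2 \setminus E} \neg R(i,j),$$

whose unique Herbrand model is exactly that graph.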

However, if I restrict to formulas with no constants, then I do not have the independence property, since I cannot distinguish between isomorphic graphs. Is this correct?

Finally, how can I compute the VC dimension of a theory/formula?
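Since everything here is finite, I would expect that the VC dimension can at least be computed by brute force: take the set family defined by a formula $\varphi(x;y)$ on a structure, i.e. $\{\{a : M \models \varphi(a,b)\} : b \in M\}$, and look for the largest shattered subset of the domain. Here is a sketch of that check (my own code, with hypothetical names, just to make sure I have the definition right):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def shatters(family, pts):
    """True iff every subset of pts arises as S & pts for some S in family."""
    traces = {frozenset(S & set(pts)) for S in family}
    return len(traces) == 2 ** len(pts)

def vc_dimension(family, domain):
    """Largest d such that some d-element subset of domain is shattered.

    Brute force over all candidate subsets, so only feasible for
    small domains; returns 0 if not even a single point is shattered.
    """
    dim = 0
    for d in range(1, len(domain) + 1):
        if any(shatters(family, pts) for pts in combinations(domain, d)):
            dim = d
        else:
            break
    return dim
```

For example, the "threshold" family $\{\{1,\dots,b\} : 0 \le b \le 4\}$ on $[4]$ should have VC dimension 1 (any single point is shattered, but no pair $\{i,j\}$ with $i<j$ is, since no member contains $j$ without $i$), while the family of all subsets of $[3]$ should have VC dimension 3.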

I am sorry if my question is all over the place, but I am finding it hard to get a rigorous understanding of the independence property in Herbrand logic.