Where are the model theory concepts from?


Look at the following definition.

Definition. Let $\kappa$ be an infinite cardinal. A theory $T$ is called $\kappa$-stable if for every model $M\models T$ and every $A\subseteq M$ with $|A|\leq \kappa$ we have $|S_n^M(A)|\leq \kappa$. A theory $T$ is called stable if it is $\kappa$-stable for some infinite cardinal $\kappa$.

I am a beginner in model theory, so my questions might be naive. When you read a basic textbook in mathematics, you see that the algebraic, analytic, geometric, and topological definitions/concepts are natural. For example, algebraic concepts such as groups, rings, fields, modules, and Galois theory have clear, natural roots. The notion of continuity in analysis is also a natural (in my sense!) concept, as is the idea behind topology. But the model-theoretic notions are usually not concrete for me at all! For example, one of the main areas of model theory is stability theory (which is part of Shelah's classification program), in which you need to count the number of types. I would like to know: where does the idea of stability (counting the number of types) come from?

Any reference would be appreciated.


There are 3 best solutions below


I recommend this survey by Chernikov as a source.

Stability, in my opinion, should be thought of in the context of the overall classification program$^1$ - in particular, the idea is to look for a "tameness" property which will hopefully imply that a given theory has "few" models (so that we have a hope of classifying them). That is, stability is (initially at least) a tool with an intended application. Remember its original appearance, after all: Morley introduced it to show that (for countable complete theories) categoricity in one uncountable cardinal implies categoricity in every uncountable cardinal, or more broadly that categoricity in a single uncountable cardinal is an incredibly powerful tameness property.

The key point, then, is to connect stability more generally with the number of models - or more simply, to understand why "more types = more models." I think a good first example to consider here is $\mathbb{C}$ versus $\mathbb{R}$ as fields. The former's theory is very simple: an algebraically closed field is classified completely by its characteristic and its transcendence degree, and in particular the theory of algebraically closed fields of characteristic zero is uncountably categorical. By contrast, the latter's theory is very complicated, at least in the sense of counting models: it's easy to show that there are for example continuum many non-isomorphic countable real closed fields. Playing around with this example, it's not hard to see that what's going on is that $\mathbb{C}$ has "few types" while $\mathbb{R}$ has "many types," and this gives us the idea to try to connect the number of types and the number of models more generally. (It also suggests more specifically a connection between instability and definable orderings, and indeed this turns out to hold in a very strong sense: see Definition 2.9 and following in the paper linked above.)
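To see the difference concretely, one can count $1$-types over $\mathbb{Q}$ in each theory (a standard computation, sketched here as an illustration). In $\mathrm{RCF}$, every cut in $(\mathbb{Q},<)$ gives a distinct complete $1$-type over $\mathbb{Q}$, so $$|S_1(\mathbb{Q})| = 2^{\aleph_0},$$ and real closed fields are unstable. In $\mathrm{ACF}_0$, a complete $1$-type over $\mathbb{Q}$ is determined either by a minimal polynomial over $\mathbb{Q}$ or by being the single type of a transcendental element, so $$|S_1(\mathbb{Q})| = \aleph_0,$$ and in fact $\mathrm{ACF}_0$ is $\kappa$-stable for every infinite $\kappa$.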


$^1$Of course, to a certain extent this just pushes the question back: why, or to what extent, is the classification program natural? In my opinion, the question of when a (first-order axiomatizable) class of structures admits a "reasonable classification" is an extremely natural one, motivated by examples on each side - e.g. uncountable dense linear orders without endpoints are extremely complicated even though their theory is very simple, while algebraically closed fields are easily classified by characteristic and transcendence degree - and the reflexive desire to find a "common thread" uniting the tame, or the wild, theories.


I'd like to argue that the counting types definition of stability is actually very "natural".

At the most basic level, model theory is about the semantics of first-order formulas, i.e. definable sets in models. Since first-order logic is built on classical propositional logic, there are Boolean algebras everywhere. In particular, if you fix a particular variable context $x_1,\dots,x_n$ and a subset $A$ of a model $M$, you get the Boolean algebra $B_n(A)$ of subsets of $M^n$ which are definable with parameters from $A$.

Now Stone duality associates to any Boolean algebra $B$ its Stone space of ultrafilters $S(B)$, and I take it as "natural" in the theory of Boolean algebras to study $B$ in terms of $S(B)$, using tools from point-set topology. But forgetting about topology for now, there is an obvious, very coarse, invariant you can assign to $B$, namely the cardinality of $S(B)$.

If $B$ is the Boolean algebra $B_n(A)$, then $S(B)$ is the space $S_n(A)$ of complete types over $A$, and the invariant above is the number of complete types over $A$.
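A minimal worked example in this notation (the choice of theory here is mine, for illustration): let $T$ be the theory of an infinite set in the empty language, $M \models T$, and $A \subseteq M$ a set of parameters. The $A$-definable subsets of $M$ are exactly the finite subsets of $A$ and their complements, and the ultrafilters on this Boolean algebra are the principal types $\{x = a\}$ for each $a \in A$ together with the single non-algebraic type $\{x \neq a : a \in A\}$. Hence $$|S_1(A)| = |A| + 1,$$ so this theory is $\kappa$-stable for every infinite $\kappa$.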

Reintroducing topology, another invariant that can be assigned to a Stone space is its Cantor-Bendixson rank. In fact, one is led "naturally" from counting types to the Cantor-Bendixson rank by the theorem that if $B$ is a countable Boolean algebra, then $|S(B)|<2^{\aleph_0}$ if and only if $S(B)$ is countable, if and only if $S(B)$ is a scattered space, meaning that the Cantor-Bendixson process terminates at the empty set at some countable ordinal stage. And thinking about the Cantor-Bendixson rank for various Boolean algebras of definable sets gives rise to various ranks in stability theory (Morley rank, local ranks, etc.).
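As a small self-contained illustration of this dichotomy (my example, not tied to any particular theory): the ordinal $\omega^2+1$ with its order topology is a countable Stone space. Its isolated points are $0$ and the successor ordinals, so the first Cantor-Bendixson derivative is $\{\omega,\ \omega\cdot 2,\ \omega\cdot 3,\ \dots\} \cup \{\omega^2\}$, the second is $\{\omega^2\}$, and the third is empty: the process terminates at $\varnothing$, so the space is scattered. By contrast, the Cantor space $2^{\mathbb{N}}$ has no isolated points at all, so the derivative process removes nothing, matching $|2^{\mathbb{N}}| = 2^{\aleph_0}$.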

That's not to say that introducing counting types and ranks into model theory was an obvious thing to do - these things look "natural" with the benefit of hindsight. But the same could be said for any of the examples of "natural" definitions in your question, most of which took many years and many iterations to get exactly right.

On the other hand, Shelah and the other model theorists who worked on stability theory in its early years had many other great insights which seem much less "natural" to me, or at least more surprising, and which were crucial to the development of stability theory: the equivalence of instability with the order property, the notion of definability of types, the tool of forking independence, etc.


Just to add on one point to why the classification program is natural:

One can interpret the model theorist's approach to studying a structure $M$ as assigning to $M$ its ``logical invariants'', which is just a fancy way to refer to the theory $T$ of $M$.

It is therefore natural to try to understand how powerful these logical invariants are, that is, how far $T$ is from characterizing its models completely up to isomorphism.

It turns out, by the Löwenheim–Skolem theorem, that the only case where $T$ has absolute power is absolutely boring: $T$ has a single finite model up to isomorphism. So the first non-boring situation where $T$ is powerful is when $T$ has few models in a certain infinite cardinality. Stability is a slightly different expression of the idea that $T$ is powerful: here $T$ has few types over small parameter sets instead of few models. As Noah pointed out, these two notions of power are closely related.

The primary insight of Morley and later Shelah, in my opinion, is the following: when $T$ is powerful in a certain way, this is because models of $T$ are equipped with some kind of special algebraic features (a dimension, an independence relation, ...) and do not encode complicated combinatorial patterns.

From this point of view, one can see the various definitions of stability as equating being powerful (having few types) with having algebraic features (a local notion of dimension, an independence relation with special properties) and with not encoding complicated combinatorial patterns (in this case, an ordering). One can also think of Morley's theorem as a corollary of this phenomenon: these features persist across cardinalities, which leads to the theory having few models in other cardinalities as well.