Definitions of the usual order in $\mathbb{N}$


I know of basically two ways of defining the usual order in $\mathbb{N}$:

  1. By using the relation "$\in$" on $\mathbb{N}$, so that $\forall m,n\in \mathbb{N}\,(m<n\longleftrightarrow m\in n)$.
  2. By saying that $\forall m, n \in \mathbb{N}(m<n\longleftrightarrow \exists p\in \mathbb{N}(n=p+m))$.
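For concreteness, both definitions can be checked side by side on a toy model. The following sketch (my own, not from any of the answers) encodes an initial segment of the von Neumann naturals as nested `frozenset`s, defines addition recursively, and verifies that definitions $(1)$ and $(2)$ agree on it:

```python
# Toy model of the von Neumann naturals: 0 = {}, succ(n) = n ∪ {n}.
def succ(n):
    return n | frozenset({n})

zero = frozenset()
nats = [zero]
for _ in range(6):
    nats.append(succ(nats[-1]))

# Definition 1: m < n iff m ∈ n.
def lt_member(m, n):
    return m in n

# Predecessor of a nonzero von Neumann natural n = {0, 1, ..., n-1}:
# it is the element of n of maximal cardinality.
def pred(n):
    return max(n, key=len)

# Recursive addition: m + 0 = m, m + succ(p) = succ(m + p).
def add(m, n):
    return m if n == zero else succ(add(m, pred(n)))

# Definition 2: m < n iff there is a nonzero p with n = p + m
# (searching p over the finite range we built; a sketch, not a proof).
def lt_add(m, n):
    return any(add(p, m) == n for p in nats[1:])

# The two definitions agree on this initial segment.
for m in nats[:5]:
    for n in nats[:5]:
        assert lt_member(m, n) == lt_add(m, n)
```

Of course this only tests finitely many cases; the question of how the equivalence is actually *proved* is exactly what is discussed below.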

The first one seems more "artificial" and "elemental," in the sense that it is more difficult to prove that it is indeed a linear order on $\mathbb{N}$. Yet this is the definition preferred by set theory books. So my question is: why? The second one is equivalent, yet much simpler and more intuitive, at least as I see it. Also, what are the main advantages of using one over the other, unless it's just a matter of taste?

Edit: To make my question precise, I'll add some extra context. First of all, I'm starting from the definition of $\mathbb{N}$ as it is made in set theory. It might be by taking the intersection of all inductive sets, or by saying that a natural number is a transitive set on which the relation $\in$ is a strict linear order, etc. Essentially, I don't understand the point of making definition $(1)$, which is by far more elementary, rather than first proving the recursion theorem and then defining $+$ to get $(2)$, which, as I said, is simpler and more natural and intuitive. So I can't see the advantages of $(1)$ over $(2)$, or why set theory books prefer one over the other (which makes me think I'm missing something).

Also, my observation is that, having $(1)$, it is very easy to deduce $(2)$ by defining $+$ via the recursion theorem and then applying the principle of induction. At this point I don't know whether it is possible to deduce $(1)$ from $(2)$. So I'm in doubt here, because I'm assuming that both definitions are equivalent, though I don't know; if they aren't, then there should be a reason why $(1)$ is the appropriate definition. But if they are equivalent, then I'd like to know the advantages of $(1)$, or the reasons why it is preferable over the other.

There are 3 answers below.

BEST ANSWER

Since your question seems to be about set theory books and the set-theoretic construction of $\mathbb N$, I'll confine this answer to that context, except for saying here that I agree with the previous answers that some other contexts would call for other definitions.

In the context of set theory, Definition 1, which says that $<$ is identified with $\in$, the already available primitive predicate of set theory, is far simpler than Definition 2, which would require first introducing addition, presumably by converting the natural recursive definition to an explicit one. I believe this simplicity is the main reason for preferring Definition 1. Why make $<$ look more complicated than it is, especially since, if you adopt the complicated Definition 2, you'll eventually want to prove that $<$ coincides with $\in$ on natural numbers?

A secondary reason to prefer Definition 1 is that it generalizes immediately to ordinal numbers (if these are represented by sets in the standard way proposed by von Neumann). Definition 2 is harder to generalize because addition of transfinite ordinal numbers lacks some of the nice algebraic properties of addition of natural numbers, in particular commutativity. In fact, your Definition 2, generalized to refer to ordinal rather than natural numbers $m$, $n$, and $p$, would not correctly define $<$, though it would become correct if $p+m$ were changed to $m+p$. (I assume that $p$ is intended to be non-zero.)
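That failure of commutativity can be illustrated with a small finite encoding (my own sketch, representing only the ordinals below $\omega\cdot\omega$ as pairs $(a,b)$ standing for $\omega\cdot a+b$; the addition rule is the standard one for this segment):

```python
# Ordinals below ω·ω encoded as pairs (a, b) meaning ω·a + b.
# Ordinal addition absorbs a finite part standing left of a limit:
# (ω·a + b) + (ω·c + d) = ω·(a + c) + d  if c > 0, else ω·a + (b + d).
def oadd(x, y):
    (a, b), (c, d) = x, y
    return (a + c, d) if c > 0 else (a, b + d)

def olt(x, y):
    # ω·a + b < ω·c + d iff (a, b) < (c, d) lexicographically.
    return x < y

omega = (1, 0)
one = (0, 1)

# 1 < ω is witnessed in the form n = m + p:  1 + ω = ω.
assert olt(one, omega)
assert oadd(one, omega) == omega

# But no ordinal p gives p + 1 = ω, since p + 1 is always a successor
# (checked here only over a finite sample, as befits a sketch).
sample = [(a, b) for a in range(5) for b in range(5)]
assert all(oadd(p, one) != omega for p in sample)
```

So, as the answer says, the generalized version of Definition 2 works with $n=m+p$ but not with $n=p+m$.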

ANSWER

With $\mathbb N$ given as a set "only", a proof of 1. is very simple (show that $\mathbb N$ and its elements are ordinals, or define $\mathbb N$ right away as the smallest infinite ordinal). You need addition for the second method, which is by contrast less simple to introduce and handle in the set-theoretic context. When introducing $\mathbb N$ arithmetically, you need not care about the "atomic" internal structure of natural numbers, which makes 1. unfeasible, but you have addition readily at hand. Then the main point of 2. is "just" associativity of addition and the fact that $0$ is not a sum of nonzero naturals. So it's rather a matter of which method comes in handy with whatever notion of $\mathbb N$ you have at that moment.

ANSWER

The question is how do you define $\Bbb N$. Is this a particular set in a universe of $\sf ZFC$, or is it a model of certain axioms in a certain language?

In the first case, we often define $\omega$, the set of finite von Neumann ordinals, to act as a surrogate for $\Bbb N$. In that case $m<n\leftrightarrow m\in n$. But in the language of set theory we only have $\in$ to work with, so if we want to talk about $<$ or $+$ or other symbols, we need to write definitions for them first. That can be done, of course, but it is much more natural and simple to just define $<$ by using $\in$.

In the second case, we don't really care about set theory. We have a set which is called $\Bbb N$, and we have some constants, operations and relations in our language. It is perfectly reasonable, for example, to require that $<$ is in our language, in which case there is no need to define it at all. It was given to us with the rest of the interpretation.

But it is also possible that the language contains $+$ but not $<$, in which case we say that $m<n\leftrightarrow\exists k(m+k=n\land m\neq n)$.
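This variant of the definition (with the extra clause $m\neq n$, which makes the order strict even when $k=0$ is allowed) can be sanity-checked on an initial segment of $\mathbb N$, searching $k$ over a finite range; a quick sketch of mine, not a proof:

```python
# m < n iff there exists k with m + k = n and m ≠ n,
# with k searched over a finite bound (enough for the ranges tested).
def lt_via_add(m, n, bound=20):
    return any(m + k == n and m != n for k in range(bound))

# Agrees with the built-in strict order on a finite sample.
assert all(lt_via_add(m, n) == (m < n)
           for m in range(10) for n in range(10))
```

Note that allowing $k=0$ but requiring $m\neq n$ gives the same relation as requiring $k\neq 0$, since $m+k=n$ with $k=0$ forces $m=n$.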


The point is that the definition of $<$ on the natural numbers depends on how you treat the natural numbers: what the language is, and what tools are available to you. Definitions don't pop out of nowhere; they are statements in a language, and they have to be written in that language.