Does cardinality really have something to do with the number of elements in an infinite set?


I've seen some videos and read some (non-rigorous) texts that explain the concept of cardinality, and I sometimes see people asking whether there are more numbers in the real interval $[0,1]$ than in the set of natural numbers. I've read about uncountability in several places, and it seems that:

  • The interpretation that there are more numbers is vague and inaccurate;

  • That the real point is whether the members of a set can be diagonalized, that is, whether such a task is possible or not;

  • And that the idea of having more numbers is a kind of simplification offered to laymen;

So does cardinality really have a connection with the notion of the quantity of elements of a set, or not? I know (I guess) that the cardinality of a finite set is the count of the number of elements in it, and perhaps this notion has somehow leaked into the notion of cardinality for infinite sets, which seems to be a very different idea.

I feel that the idea of the quantity of elements of an infinite set is weird per se; when I read about diagonalization, it made a lot more sense than the idea of the quantity of an infinite set.
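The diagonal argument alluded to above can be made concrete in a short sketch (my own illustration, not from the original question): given *any* enumeration of infinite binary sequences, one builds a sequence that differs from the $n$-th listed sequence at position $n$, so no enumeration can cover them all.

```python
# Sketch of Cantor's diagonal argument. An infinite binary sequence is
# represented as a function k -> {0, 1}; an enumeration of sequences is a
# function n -> sequence.

def diagonal(sequences):
    """Return a binary sequence that disagrees with the n-th enumerated
    sequence at index n, so it appears nowhere in the enumeration."""
    return lambda n: 1 - sequences(n)(n)  # flip the n-th digit of the n-th sequence

# Toy enumeration (a made-up example): the n-th sequence is constant n % 2.
enum = lambda n: (lambda k: n % 2)

missed = diagonal(enum)
# For every n, `missed` differs from enum(n) at index n:
assert all(missed(n) != enum(n)(n) for n in range(1000))
```

Whatever enumeration is plugged in, the same one-line construction produces a sequence the enumeration misses; that is the whole content of the argument.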


There are 4 best solutions below

BEST ANSWER

What is a number? It is an informal notion of a measurement of size. This size can be discrete (like the integers), a ratio, a length (like the real numbers), and so on.

Cardinal numbers, and the notion of cardinality, can be seen as a very good notion for the size of sets.

One can talk about other ways of describing the size of an infinite set. But cardinality is a very good notion because it doesn't require additional structure to be put on the set. For example, it's very easy to see how to define a bijection between $\Bbb N$ and $\Bbb Z$, but as ordered sets these are nothing alike. Cardinality allows us to discard that structure.
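The bijection between $\Bbb N$ and $\Bbb Z$ mentioned above can be written down explicitly; here is a minimal sketch (a standard interleaving construction; the function names are my own):

```python
# An explicit bijection between N = {0, 1, 2, ...} and Z:
#   0, 1, 2, 3, 4, ...  <->  0, 1, -1, 2, -2, ...

def nat_to_int(n):
    """Send odd n to (n + 1) / 2 and even n to -n / 2."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

def int_to_nat(z):
    """Inverse map: positive z to 2z - 1, non-positive z to -2z."""
    return 2 * z - 1 if z > 0 else -2 * z

# Round trips in both directions confirm the pairing is one-to-one and onto:
assert all(int_to_nat(nat_to_int(n)) == n for n in range(1000))
assert all(nat_to_int(int_to_nat(z)) == z for z in range(-500, 500))
```

Note that the map thoroughly scrambles the usual orderings of both sets; that is exactly the extra structure cardinality lets us discard.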

Once we accept this as a reasonable notion for the size of a set, we can say that the number of elements a set has is its cardinality.


Related threads:

  1. Is there a way to define the "size" of an infinite set that takes into account "intuitive" differences between sets?
  2. Cardinality != Density?
  3. Comparing the sizes of countable infinite sets
  4. Why the principle of counting does not match with our common sense
  5. Why do the rationals, integers and naturals all have the same cardinality?
ANSWER

The "number of elements of an infinite set" does not mean anything unless and until you choose to define a meaning for it.

So in this sense you're right: the cardinality of an infinite set does not have anything to do with the number of elements in it -- for the trivial reason that "the number of elements in it" is meaningless!

Cardinality is, however, an interesting and useful concept in its own right -- for example, to prove that transcendental real numbers exist, it is much easier to use a cardinality argument than it is to show that a particular number is transcendental, and such transfinite counting arguments are useful in many places.
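The counting argument behind that existence proof can be illustrated concretely (my own sketch, not part of the original answer): an algebraic number is a root of a nonzero integer polynomial, and for each bound $h$ there are only finitely many integer polynomials of degree at most $h$ with coefficients bounded by $h$, each with finitely many roots. So the algebraic numbers form a countable union of finite sets, hence are countable, while the reals are not; transcendental numbers must therefore exist.

```python
from itertools import product

def polys_of_height(h):
    """All nonzero integer polynomials, encoded as coefficient tuples, with
    degree <= h and every coefficient in [-h, h]. Finitely many for each h."""
    coefs = range(-h, h + 1)
    return [p for p in product(coefs, repeat=h + 1) if any(p)]

# Each polynomial has at most deg-many roots, so the algebraic numbers are a
# countable union of finite sets -- countable -- while R is uncountable.
assert len(polys_of_height(1)) == 3**2 - 1   # 8 nonzero polynomials of height 1
assert len(polys_of_height(2)) == 5**3 - 1   # 124 of height 2
```

No individual number is ever exhibited as transcendental; the whole argument is bookkeeping about sizes, which is what makes it so much easier.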

It should also be clear that cardinality generalizes the usual notion of "how many elements" from finite sets, in the sense that it coincides with it when the sets are finite, and the definition does not contain anything that's obviously a trick to sneak in a different behavior for infinite sets.

Since no other proposed generalization of "how many elements are in this set" appears to be as useful as cardinality (this is not a deductive fact, just the practical observation that nobody has proposed one, at least not while convincing very many mathematicians that the proposal was useful), it has become customary to speak about cardinality with the usual wordings of "how many", "fewer", "more", and so on. But this is at root merely a practical convention -- it does not purport to rest on any objective Platonic concept of how-many-ness that necessarily generalizes to infinite sets in this and only this way.


Strictly speaking, measure theory and various topological notions do offer competing notions of "more points" and "fewer points" that are both useful and different from cardinality. They just haven't won the practical battle for the meaning of the well-known phrasings from the finite case -- possibly because these notions require that the sets they are applied to come with more structure than just "being an infinite set", which is all that is needed to speak about cardinality.

ANSWER

Cardinality is one way to measure the size of a set, and depending on what you are interested in, it can be a very good measure. In particular, if you are interested in sets with no additional structure (and assume the Axiom of Choice), we arrive at a very natural concept of size: any two sets can be compared, and "is bigger" is a monotone and transitive relation. Furthermore, any two "isomorphic" structures will have the same cardinality.

There are other (abstract) notions of measuring the size of sets that can be used.

  • The Lebesgue measure (on the reals and associated sets) is one common example. In this sense there are also more numbers in $[0,1]$ than there are natural numbers.
  • We can talk about the (asymptotic upper) density of sets of natural numbers: $$\limsup_{n \rightarrow \infty} \frac{|A \cap \{ 0 , \ldots , n-1 \}|}{n}$$ and this will give a decidedly different measure of the size of sets of naturals.
  • More generally we can pick a filter of subsets of some fixed set $X$, and then call a set "large" if it belongs to this filter, and "small" if its complement does. (In this case it is usually better to begin with a nonprincipal filter.) This sort of measuring doesn't usually result in any quantitative outputs.
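The asymptotic density from the second bullet is easy to probe numerically; here is a small empirical sketch (my own, with made-up sample sets): the evens and the perfect squares both have the same cardinality as $\Bbb N$, yet very different densities.

```python
import math

def density(member, n):
    """|A intersect {0, ..., n-1}| / n, with A given as a membership test."""
    return sum(1 for k in range(n) if member(k)) / n

is_even = lambda k: k % 2 == 0
is_square = lambda k: math.isqrt(k) ** 2 == k

n = 10**5
assert density(is_even, n) == 0.5      # the evens have density 1/2
assert density(is_square, n) < 0.01    # the squares have density 0
```

Both sets are countably infinite, so cardinality cannot tell them apart; density can, which is the sense in which it is a "decidedly different" measure of size.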

But the use of comparatives such as "more" or "less" in the case of cardinality betrays the fact that mathematics (even set theory!) is done by humans, and humans have a tendency to make analogies to the real world. We have an intuitive sense of what "more" means, and this intuitive sense is not contradicted by the notion of cardinality. Moreover, when restricted to the finite case, this notion completely matches our intuitions. While it is often dangerous to extend the more familiar finite into the infinite, in the case of analogies that can be rigorously defended, I will make an exception.

ANSWER

The set-theoretic perspective has been well represented in other answers, but there is another perspective that has not been mentioned yet.

Although it is not emphasized in set-theory contexts, the use of bijections to measure cardinality is closely related to "Hume's principle", which is the claim that if two collections $A$ and $B$ of objects are equinumerous, then there is a bijection between them. In this setting, "equinumerous" means that there is a relation $R$ such that every $a \in A$ is related to exactly one object of $B$, and every $b \in B$ is related to exactly one object of $A$. For a careful treatment of this background, see "Frege's Theorem and Foundations for Arithmetic" by Edward N. Zalta in the Stanford Encyclopedia of Philosophy.
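The "equinumerous" condition just described can be checked mechanically for finite collections; here is a minimal sketch (my own illustration, with made-up sample sets), where a relation is represented as a set of pairs:

```python
# A relation R between A and B witnesses equinumerosity, in the sense of
# Hume's principle, when every element of A relates to exactly one element
# of B and every element of B relates to exactly one element of A.

def witnesses_equinumerosity(R, A, B):
    return (all(sum(1 for (a, b) in R if a == x) == 1 for x in A)
            and all(sum(1 for (a, b) in R if b == y) == 1 for y in B))

A, B = {1, 2, 3}, {"x", "y", "z"}
assert witnesses_equinumerosity({(1, "x"), (2, "y"), (3, "z")}, A, B)
# Here "x" is hit twice and "y" never, so the condition fails:
assert not witnesses_equinumerosity({(1, "x"), (2, "x"), (3, "z")}, A, B)
```

In ZFC such a relation immediately *is* (or yields) a bijection as a set of ordered pairs; the point of the answer below is that in weaker settings the relation and the function witnessing it can come apart.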

The distinction between cardinality and equinumerosity may seem trivial, but only because we are used to working in set theory with the full axiom of separation and the axiom of choice. When we work in other settings, Hume's principle becomes much less straightforward.

It could equally well be argued that the conception of cardinality taught in ordinary set theory is the relational one (the one that, by Hume's principle, is equivalent to the bijection-based definition of cardinality). Because these are so tightly intertwined in ZFC, where relations are reduced to just another kind of set, it is hard to learn one of these concepts of size without learning the other.

One way that Hume's principle can fail is when we take "relations" to mean "definable relations" and we limit the set-existence (comprehension/replacement) scheme. This happens in settings including Kripke-Platek set theory and systems of higher-order arithmetic. In those settings, there is no reason to think that, just because a formula $\phi(x,y)$ defines a one-to-one correspondence between two sets in a model, there must actually be a function in the model that witnesses the correspondence.