More natural ways to view things that are usually described as equivalence classes


A graph is usually defined as a set together with a relation on it. But when I think of some concrete "graph", say, the complete "graph" on three vertices, I don't think in such terms: I just see a triangle, and I don't have any names for the vertices; they are indistinguishable to me. Information would have to be added for the vertices to be labeled. If one wants to study such objects formally, one would usually define them as equivalence classes of graphs on some set of vertices under isomorphism. But even if we restrict the set of vertices to be the natural numbers up to the size of the graph, this construction yields surprisingly "large" objects - sets with many complicated elements.

As in the case of necklaces and Lyndon words, one could pick some special object from the equivalence class - I'm not aware of any standard way to do this for graph isomorphism classes, but we could define some textual representation of graphs, like a list of pairs of vertices connected by edges given in decimal, and take the lexicographically smallest representation among the elements of a class. Ultimately, we could define a bijection between these classes and the natural numbers, and say that what was seen as a class is really just a natural number (this approach would also generalize to infinite graphs by bijecting with a larger set). However, these methods don't seem to "get to the heart of the matter". Perhaps the problem is that to "do anything" computationally with graphs represented in these ways, say, add an edge, or even to define, say, a minor, one would basically have to convert them to some other form first. Also, the choice of the special object feels rather arbitrary, at least the ones I mentioned for graphs here.
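To make the "lexicographically smallest representation" idea concrete, here is a minimal brute-force sketch in Python (the function name and the sorted-edge-list encoding are my own choices, not a standard convention, and the search over all relabelings is O(n!)):

```python
from itertools import permutations

def canonical_edges(n, edges):
    """Return the lexicographically smallest sorted edge list obtainable
    by relabeling the vertices 0..n-1 -- a canonical representative of
    the isomorphism class (brute force over all n! relabelings)."""
    edge_set = {frozenset(e) for e in edges}
    best = None
    for perm in permutations(range(n)):
        relabeled = sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edge_set)
        if best is None or relabeled < best:
            best = relabeled
    return best

# Two different labelings of the path on three vertices
# agree on their canonical form:
print(canonical_edges(3, [(0, 1), (1, 2)]) ==
      canonical_edges(3, [(1, 0), (0, 2)]))  # True
```

As the question notes, this representative is usable in principle but feels arbitrary, and one must "decode" it before doing anything graph-theoretic with it.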

Thinking about this, I noticed that there's a special kind of object that doesn't have these issues, at least in set theory - rooted trees where each node's children are all unique. These objects can be represented literally as sets, but one could imagine that in a different formal system, like "set theory but each set can be created from others in two versions: red or blue", they would have to be more complicated. Perhaps there is a system where more kinds of objects can be represented so simply?
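The "represented literally as sets" point can be illustrated computationally: in Python, such rigid rooted trees can be modeled by nested frozensets, and two constructions of the same tree are literally equal, with no quotienting by isomorphism needed (a small sketch; the variable names are mine):

```python
# A rooted tree whose children are pairwise distinct can be encoded as
# the (hereditarily finite) set of its child subtrees.  Equality of
# frozensets then *is* equality of trees -- no equivalence classes.
empty = frozenset()                 # the bare root: {}
one   = frozenset({empty})          # {{}}
two   = frozenset({empty, one})     # {{}, {{}}}

# Order of construction and repeated children are irrelevant
# by construction:
also_two = frozenset({one, empty, empty})
print(two == also_two)  # True
```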

One could of course say that all that matters is that there's some "implementation" of needed concepts, and after all it's well known that mathematicians don't care about ugly source code ;). Perhaps it's not the most important thing in life, but if I had the choice I would prefer if everything was nice and pretty all the way down. Or maybe my whole intuition is wrong and for example graph isomorphism classes are really a secondary concept to graphs?

Edit: For at least one type of object that isn't sets, one can still find a nice representation in set theory: words up to permutation of the alphabet can be partitions. Are there others?
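A small sketch of that correspondence: a word maps to the partition of its index set grouping the positions that hold equal letters, and two words induce the same partition exactly when one is obtained from the other by permuting the alphabet (the helper name is mine):

```python
def positions_partition(word):
    """Map a word to the partition of its index set that groups
    positions holding equal letters.  This partition is invariant
    under permutations of the alphabet."""
    blocks = {}
    for i, letter in enumerate(word):
        blocks.setdefault(letter, set()).add(i)
    return frozenset(frozenset(block) for block in blocks.values())

# "cososo" is "banana" with b->c, a->o, n->s:
print(positions_partition("banana") == positions_partition("cososo"))   # True
print(positions_partition("banana") == positions_partition("bananas"))  # False
```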

There are 2 answers below.

I'm not sure that I have correctly understood the issues in your question, so my answer may be only weakly relevant to it. But I hope it is useful to you.

I think that when we speak of a way of viewing an equivalence class of isomorphic objects, what we mean is a structure. This can be taken as a basic concept in mathematics: a family of relations on a set (and possibly on a family of its subsets, and so forth) satisfying given properties. In [Bou], Nicolas Bourbaki proposed a program to systematize the world of mathematical objects based on this concept. The organizing principle is a hierarchy of structures, going from the simple to the complex, from the general to the particular. This direction runs opposite to the historical development of mathematics. I think mathematical ideas were initially properties of objects from everyday experience, for instance of ten sticks or of a round plate. Later these properties were abstracted from the objects and idealized (for instance, into the notions of the number ten or of a disk) and then generalized (for instance, to the notion of a natural number) [Ale].

As a working mathematician, I usually deal with concrete models. Bourbaki agrees that "the mathematician does not work like a machine, nor as the workingman on a moving belt; we can not over-emphasize the fundamental role played in his research by a special intuition, which is not the popular sense-intuition, but rather a kind of direct divination (ahead of all reasoning) of the normal behavior, which he seems to have the right to expect of mathematical beings, with whom a long acquaintance has made him as familiar as with the beings of the real world" [Bou].

But when I need to validate my intuition, I have to use magic tricks like arguments dealing with equivalence classes and other formal machinery. These can be cumbersome and unnatural (for instance, as I remember, a full expression of the notion of $1$ in Bourbaki's system needs several thousand symbols). But this is the price of rigor.

References

[Ale] Aleksandr Aleksandrov, A general vision of mathematics, in “Mathematics: its contents, methods, and meaning”, vol. 1, eds.: A. D. Aleksandrov, A. N. Kolmogorov, M. A. Lavrent'ev, Publ. of Academy of Sciences of USSR, Moscow, 1956, in Russian ("Общий взгляд на математику"), 5–79.

[Bou] Nicolas Bourbaki, L'Architecture des mathématiques, in "Les grands courants de la pensée mathématique", ed. F. Le Lionnais, Cahiers du Sud, 1948, 35–47. Authorized English translation. Russian translation.


I'll add my thoughts on this below, but this is more or less just an elaboration of @antkam's comment.

Much (but certainly not all) of working in mathematics, with set theory as the foundation, can be fit into the following framework:

  1. Find some phenomenon in the real world that you'd like to understand.
  2. Model it with sets in some natural way.
  3. Filter away the unnecessary details.

For example, if you want to understand the concept of cardinality, you just take sets as your objects and mod out by bijections. For understanding aspects of networks and some types of interactions, you take graphs and mod out by graph isomorphisms; symmetries $\rightarrow$ groups $\rightarrow$ group isomorphisms, space $\rightarrow$ topological spaces $\rightarrow$ homeomorphisms, and the list goes on.

Now, as you say, it seems that the "filtering" step often introduces a lot of complexity. You might think of remedying this either by using different models, or by building our theories on a foundation other than set theory altogether.

But here is the thing: the complexity is not really inherent to the filtering step - it is inherent to the phenomena we are trying to model. Networks are complicated, symmetries are complicated, space is complicated. You can change things up so the complexity lies at a different part of the process of formalization, but you (usually) can't escape it.1

Given this harsh2 reality, there are good reasons to keep the initial representations (and our foundational theory) simple and to hide the complexity behind the filtering step. The foundation has to be reasonably expressive and easy to use, because we use it to model all sorts of things, and set theory seems to be quite successful in these respects (but I don't know much about current thinking on foundations, so don't take my word for it). Keeping the initial representation simple helps tremendously in the formal manipulation of our objects (edge addition, etc.), as well as making it easier to connect different objects and areas of mathematics. A slightly silly example: if we only defined cardinality for sets of the form $\{1,\dots,n\}$, then the notion of cardinality would be much less useful in all other areas of mathematics, simply because it would apply in fewer cases. Similarly, if we only dealt with isomorphism classes of groups and graphs, then defining a Cayley graph would probably become quite difficult.
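The Cayley-graph point can be made concrete with a minimal sketch (the names and encoding here are my own): the construction consumes actual group elements and an actual multiplication, which a bare isomorphism class does not provide.

```python
def cayley_graph(elements, op, generators):
    """Undirected edges of a Cayley graph: connect g to g*s for each
    generator s.  This needs a *concrete* group (elements plus an
    operation), not merely its isomorphism class."""
    return {frozenset({g, op(g, s)}) for g in elements for s in generators}

# Z/6 with generator 1 gives the 6-cycle:
mod6 = lambda a, b: (a + b) % 6
cycle = cayley_graph(range(6), mod6, {1})
print(len(cycle))  # 6
```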

So, to summarize, mathematicians do care about "ugly source code". But when the problem is sufficiently complex, it seems that there is always going to be some "ugliness" involved - the question is where it appears. And in the end, just like with source code, beauty is less important per se than usability and modularity.

1 There's some joke hidden in here about the devil being in the details, but I can't find it.
2 Of course this is not really "harsh" reality - if the things we study were not complex, then we would quickly grow bored of studying them.

Edit: I should also add that there is a different philosophy which, to some extent, sidesteps these issues. Maybe the key to modeling phenomena is not to take objects as primitives, but rather the ways objects can interact with each other. This is the viewpoint of category theory, and it turns out to be a surprisingly strong idea in some areas of mathematics. In this case the "implementation" of the objects really does not matter. In fact, a key takeaway of category theory is that two models that, on the surface, seem very different can sometimes be just two "implementations" of the same phenomenon.