I often hear people saying things like:
- one only really understands groups if one looks at group homomorphisms between them
- one only really understands rings if one looks at ring homomorphisms between them
- ...
Of course, these statements are just special cases of the category-theoretic slogan that what really counts is the morphisms, not the objects. I can appreciate that it's quite cool that one can characterize constructions such as the free group or the direct product of groups just in terms of their relation to other groups (and in this sense, the morphisms from and to that construction help to understand the construction better). But beyond that, I'm struggling to appreciate the usefulness of homomorphisms. I understand that what one is interested in is groups up to isomorphism (one wants to classify groups), so the notion of isomorphism seems to me very fundamental, but the notion of a homomorphism seems to me in some sense just to be a precursor to the fundamental notion of an isomorphism.
I guess it would help if some of you could point me to bits and pieces of group theory where homomorphisms (instead of isomorphisms) are essential. In what sense do group homomorphisms help us to understand groups themselves better?
Of course, I could ask the same question about ring theory or some other subfield of mathematics. If you have answers for why morphisms matter in these fields, then feel free to tell me! After all, what I'm interested in is examples of the usefulness of homomorphisms from down-to-earth, concrete mathematics, so what I don't want is just category-theoretic philosophizing (this is not to say I don't like category theory, but for the purpose of this question I'm interested in why morphisms matter in specific subfields of mathematics such as group theory).
Here is a logic-based viewpoint on the use of isomorphisms and homomorphisms. Every first-order structure (e.g. group, ring, field, module, ...) has an associated (complete) theory, namely the set of all sentences in its language that are true of it. For example, each group satisfies the group axioms. Some groups $(G,·)$ satisfy "$∀x,y\ ( x·y = y·x )$" (i.e. $(G,·)$ is abelian) while others do not. Any isomorphism between two structures $M,N$ immediately tells you that their theories are identical. Furthermore, if there is any homomorphism from $M$ onto $N$, then every positive sentence (i.e. a sentence constructed using only $∀,∃,∧,∨,=$, with no negation or implication) that is true of $M$ is also true of $N$. For instance, being abelian is expressed by a positive sentence, which gives Lee Mosher's example of proving a group nonabelian by exhibiting a homomorphism onto a nonabelian group.
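To see the contrapositive in action, here is a quick Python sketch (the helper names are mine): reducing entries mod 2 is a homomorphism from 2×2 integer matrices under multiplication onto 2×2 matrices over $\mathbb{Z}/2\mathbb{Z}$. Commutativity is a positive sentence; since it fails in the image, it must already fail in the source.

```python
from itertools import product

def matmul(A, B, mod=None):
    """2x2 matrix product, optionally with entries reduced mod `mod`."""
    C = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    if mod is not None:
        C = [[x % mod for x in row] for row in C]
    return C

def reduce2(A):
    """The 'reduce entries mod 2' map -- a homomorphism for multiplication."""
    return [[x % 2 for x in row] for row in A]

# Homomorphism check on a few samples: reduce2(A*B) = reduce2(A)*reduce2(B)
samples = [[[1, 2], [3, 4]], [[0, 5], [7, 1]], [[2, 2], [1, 3]]]
for A, B in product(samples, repeat=2):
    assert reduce2(matmul(A, B)) == matmul(reduce2(A), reduce2(B), mod=2)

# Two matrices whose images mod 2 fail to commute:
A, B = [[1, 1], [0, 1]], [[1, 0], [1, 1]]
assert matmul(reduce2(A), reduce2(B), mod=2) != matmul(reduce2(B), reduce2(A), mod=2)
# "forall x,y: x*y = y*x" is positive and false in the image, so it is
# already false in the source: 2x2 integer matrices do not commute.
```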
But in fact this idea is much more widely applicable than it may seem at first! For instance, the proof that the 15 puzzle, starting from the solved state but with two numbers swapped, cannot be solved is based on an invariant: the parity of the permutation of all 16 squares plus the distance of the empty square from its desired final location. The parity of a permutation in $S_n$ is just a homomorphism from $S_n$ into $\mathbb{Z}/2\mathbb{Z}$, and this invariant is very useful in many results not just in combinatorics but also in linear algebra (such as Leibniz's determinant formula).
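The homomorphism property of parity can be checked directly; here is a small Python sketch (function names are mine) that verifies $\operatorname{sgn}(p \circ q) = \operatorname{sgn}(p) + \operatorname{sgn}(q) \bmod 2$ exhaustively in $S_4$, computing parity by counting inversions:

```python
from itertools import permutations

def parity(perm):
    """Parity (0 = even, 1 = odd) of a permutation of 0..n-1, via inversions."""
    n = len(perm)
    return sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n)) % 2

def compose(p, q):
    """Composition (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

# The sign map S_4 -> Z/2Z is a homomorphism: check all 24 * 24 pairs.
for p in permutations(range(4)):
    for q in permutations(range(4)):
        assert parity(compose(p, q)) == (parity(p) + parity(q)) % 2
```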
Just to make clear how the idea shows up in invariants, suppose we have a puzzle and want to prove that no sequence of moves can lead to a certain state. Then we can consider the structure $M$ of states with a function symbol for each possible move. The claim that a sequence of moves is a solution can be expressed as an equation of the form "$y = f_1(f_2(\cdots f_k(x)\cdots))$". An invariant $i$ is a homomorphism on $M$. In some cases, we can find such an $i$ where $i(f(x)) = i(x)$ for every move $f$ and every state $x$, which gives "$i(y) = i(x)$". But we may in general want to reason about the equivalence classes of states according to $i$. For instance, many permutation puzzles have parities, which need to be fixed appropriately before commutators can be used to solve them.
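For the 15 puzzle this works out concretely. Below is a Python sketch (my own encoding of the states): each slide of a tile into the blank flips both the parity of the permutation of all 16 squares and the parity of the blank's taxicab distance to its home corner, so their sum mod 2 is an invariant, and the state with two tiles swapped has the wrong value.

```python
import random

def parity(seq):
    n = len(seq)
    return sum(seq[i] > seq[j] for i in range(n) for j in range(i + 1, n)) % 2

def blank_dist(state):
    """Taxicab distance of the blank (0) from the bottom-right corner."""
    r, c = divmod(state.index(0), 4)
    return abs(r - 3) + abs(c - 3)

def invariant(state):
    """Permutation parity + blank-distance parity: preserved by every move."""
    return (parity(state) + blank_dist(state)) % 2

def moves(state):
    """All states reachable by one slide of a tile into the blank."""
    r, c = divmod(state.index(0), 4)
    out = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 4 and 0 <= nc < 4:
            s = list(state)
            i, j = 4 * r + c, 4 * nr + nc
            s[i], s[j] = s[j], s[i]
            out.append(tuple(s))
    return out

solved = tuple(list(range(1, 16)) + [0])
swapped = tuple([2, 1] + list(range(3, 16)) + [0])   # tiles 1 and 2 exchanged

# Random walk: the invariant never changes under legal moves...
state = solved
for _ in range(1000):
    state = random.choice(moves(state))
    assert invariant(state) == invariant(solved)

# ...but the swapped state has the other value, hence is unreachable.
assert invariant(swapped) != invariant(solved)
```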
Another example is the winding of a continuous path that avoids the origin around the origin. Let $A$ be the set of continuous paths that do not pass through the origin. Let $s$ be a ternary relation on $A$ such that $s(P,Q,R)$ iff $P$ ends where $Q$ starts and $R$ is the result of joining $P$ to $Q$. There is a homomorphism $w$ from $(A,s)$ into $\mathbb{R}$ with the addition relation, such that $w(C)∈\mathbb{Z}$ for any closed path $C∈A$. Winding is used in one proof of the two-dimensional intermediate value theorem.
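For polygonal approximations of such paths, $w$ can be computed by summing the signed angles of consecutive segments; a small Python sketch (assuming consecutive points are less than half a turn apart, so each signed angle is unambiguous):

```python
import cmath
import math

def winding(path):
    """Total angle swept around the origin by a polygonal path (a list of
    complex points, none equal to 0), measured in full turns."""
    total = 0.0
    for a, b in zip(path, path[1:]):
        total += cmath.phase(b / a)   # signed angle from a to b, in (-pi, pi]
    return total / (2 * math.pi)

# A closed loop once around the origin has winding number 1:
n = 100
circle = [cmath.exp(2j * math.pi * k / n) for k in range(n + 1)]
assert abs(winding(circle) - 1.0) < 1e-9

# Homomorphism property: if P ends where Q starts, then
# winding(P joined to Q) = winding(P) + winding(Q).
p = circle[:51]    # upper half of the loop
q = circle[50:]    # lower half, starting where p ends
assert abs(winding(p) + winding(q) - winding(circle)) < 1e-9
```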
Furthermore, homomorphisms are useful in constructing new structures. For example, a field $F$ can be extended by adjoining a root of an irreducible polynomial $p$ over $F$, but proving that this works uses the quotient homomorphism $j$ from $F[X]$ to $F[X]/(p·F[X])$ to get $p(j(X)) = j(p(X)) = j(0) = 0$. For yet another example, the construction of the reals via Cauchy sequences of rationals arguably requires partitioning them into classes such that any two sequences in the same class have pointwise difference going to zero, and effectively we are proving that there is a homomorphism on Cauchy sequences of rationals whose kernel is the set of sequences that go to zero. Sound familiar (first isomorphism theorem)?
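The field-extension construction can be made concrete in a few lines of Python. Below is a sketch (my own encoding, coefficients low-degree first) for $F = \mathbb{F}_2$ and $p = X^2 + X + 1$: the quotient map $j$ takes a polynomial to its remainder mod $p$, and the class $j(X)$ really is a root of $p$ in the quotient field $\mathbb{F}_4$.

```python
# Arithmetic in F_2[X]/(p) with p = 1 + X + X^2 over F_2; an element of the
# quotient is represented by its remainder mod p (a pair of bits).
P = (1, 1, 1)

def polymul(a, b):
    """Product of polynomials over F_2 (coefficient arithmetic is XOR/AND)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y
    return tuple(out)

def reduce(a):
    """The quotient map j: F_2[X] -> F_2[X]/(p): divide by p, keep remainder."""
    a = list(a)
    while len(a) >= len(P):
        if a[-1]:                       # subtract a shifted copy of p
            shift = len(a) - len(P)
            for i, c in enumerate(P):
                a[shift + i] ^= c
        a.pop()
    while len(a) < len(P) - 1:          # pad to fixed length
        a.append(0)
    return tuple(a)

x = (0, 1)                              # the class j(X)
# Evaluate p at j(X): j(X)^2 + j(X) + 1 must be 0 in the quotient,
# i.e. the quotient really does adjoin a root of p.
x2 = reduce(polymul(x, x))
val = tuple(a ^ b ^ c for a, b, c in zip(x2, reduce(x), reduce((1,))))
assert val == (0, 0)
```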
If we look at other algebraic structures, we also have the determinant of square matrices, which is a homomorphism from the multiplicative monoid of the matrix ring into the underlying ring, and this is very useful in many proofs. A module over a ring $R$ is essentially an abelian group together with a ring homomorphism from $R$ into the group's endomorphism ring. In geometry, it can be useful to use projection from 3d to 2d, such as in the proof of Desargues's theorem. Here the projection is a homomorphism that respects collinearity.
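The multiplicativity of the determinant, $\det(AB) = \det(A)\det(B)$, is easy to sanity-check; a Python sketch for $2\times 2$ integer matrices (note only multiplication is respected, not addition):

```python
from itertools import product

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def matmul2(A, B):
    """Product of 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# det(AB) = det(A) * det(B), checked exhaustively for entries in {-1, 0, 1}.
mats = [[[a, b], [c, d]] for a, b, c, d in product((-1, 0, 1), repeat=4)]
for A in mats:
    for B in mats:
        assert det2(matmul2(A, B)) == det2(A) * det2(B)
```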
In a broad sense, a nontrivial homomorphism reduces a structure to a simpler one while respecting some operations and properties, and in doing so may reveal key features of the original structure or let us transfer knowledge about it to knowledge about its image.