Why is it the (group) morphisms that matter?
I often hear people saying things like:
- one only really understands groups if one looks at group homomorphisms between them
- one only really understands rings if one looks at ring homomorphisms between them
- ...
Of course, these statements are just special cases of the category-theoretic slogan that what really counts is the morphisms, not the objects. I can appreciate that it's quite cool that one can characterize constructions such as the free group or the direct product of groups purely in terms of their relation to other groups (and in this sense, the morphisms from and to such a construction help to understand the construction better). But beyond that, I'm struggling to appreciate the usefulness of homomorphisms. I understand that what one is interested in is groups up to isomorphism (one wants to classify groups), so the notion of isomorphism seems to me to be very fundamental, but the notion of a homomorphism seems to me, in some sense, just a precursor to the fundamental notion of an isomorphism.
I guess it would help if some of you could point me to bits and pieces of group theory where homomorphisms (rather than isomorphisms) are essential. In what sense do group homomorphisms help us understand groups themselves better?
Of course, I could ask the same question about ring theory or some other subfield of mathematics. If you have answers as to why morphisms matter in those fields, feel free to share them! After all, what I'm interested in is examples of the usefulness of homomorphisms from down-to-earth, concrete mathematics, so what I don't want is just category-theoretic philosophizing (this is not to say I don't like category theory, but for the purpose of this question I'm interested in why morphisms matter in specific subfields of mathematics such as group theory).
Even if all you wish to do is classify groups up to isomorphism, there is a very important collection of isomorphism invariants of a group $G$, as follows: given another group $H$, does there exist a surjective homomorphism $G \to H$?
As a special case, I'm sure you would agree that being abelian is an important isomorphism invariant. One very good way to prove that a group $G$ is not abelian is to prove that it has a homomorphism onto a nonabelian group. Many knot groups are proved to be nonabelian in exactly this manner.
As another special case, the set of homomorphisms from $G$ to the group $\mathbb Z$ has the structure of an abelian group (the sum of any two such homomorphisms is again a homomorphism, and any two such homomorphisms commute). This abelian group is called the first cohomology of $G$ with $\mathbb Z$ coefficients and is denoted $H^1(G;\mathbb Z)$. If $G$ is finitely generated, then $H^1(G;\mathbb Z)$ is also finitely generated, and therefore you can apply the classification theorem of finitely generated abelian groups to $H^1(G;\mathbb Z)$. Any abelian group isomorphism invariant applied to $H^1(G;\mathbb Z)$ is an (ordinary) group isomorphism invariant of $G$. For example, the rank of the abelian group $H^1(G;\mathbb Z)$, which is the largest $n$ such that $\mathbb Z^n$ is isomorphic to a subgroup of $H^1(G;\mathbb Z)$, is a group isomorphism invariant of $G$; this number $n$ can be described as the largest number of "linearly independent" surjective homomorphisms $G \to \mathbb Z$.
I could go on and on, but here's the general point: Anything you can "do" with a group $G$ that uses only the group structure on $G$ can be turned into an isomorphism invariant of $G$. In particular, properties of homomorphisms from (or to) $G$, and of the ranges (or domains) of those homomorphisms, can be turned into isomorphism invariants of $G$. Very useful!
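To make the "homomorphisms as invariants" point concrete, here is a small brute-force sketch in Python (my own illustration; the groups and the exhaustive search are chosen purely for simplicity): counting homomorphisms into $\mathbb Z/2\mathbb Z$ already distinguishes the two groups of order 4.

```python
# Brute-force sketch: for a fixed group H, the number of homomorphisms G -> H is
# an isomorphism invariant of G.  Here it distinguishes Z/4 from Z/2 x Z/2,
# the two non-isomorphic groups of order 4.
from itertools import product

def homomorphism_count(G, H, mulG, mulH):
    """Count all maps f: G -> H with f(a *_G b) = f(a) *_H f(b)."""
    count = 0
    for images in product(H, repeat=len(G)):
        f = dict(zip(G, images))
        if all(f[mulG(a, b)] == mulH(f[a], f[b]) for a in G for b in G):
            count += 1
    return count

Z4 = list(range(4))                                    # Z/4 under addition mod 4
V4 = [(a, b) for a in range(2) for b in range(2)]      # Z/2 x Z/2 (Klein four-group)
Z2 = list(range(2))                                    # the target group Z/2

mul_Z4 = lambda a, b: (a + b) % 4
mul_V4 = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
mul_Z2 = lambda a, b: (a + b) % 2

print(homomorphism_count(Z4, Z2, mul_Z4, mul_Z2))      # 2
print(homomorphism_count(V4, Z2, mul_V4, mul_Z2))      # 4: different count, so the groups are not isomorphic
```

Both groups are abelian of order 4, so cruder invariants do not separate them; the homomorphism count does.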
Here is a logic-based viewpoint on the use of isomorphisms and homomorphisms. Every first-order structure (e.g. group, ring, field, module, ...) has an associated (complete) theory, namely the set of all sentences in its language that are true of it. For example, each group satisfies the group axioms. Some groups $(G,·)$ satisfy "$∀x,y\ ( x·y = y·x )$" (i.e. $(G,·)$ is abelian) while others do not. Any isomorphism between two structures $M,N$ immediately tells you that their theories are identical. Furthermore, if there is any homomorphism from $M$ onto $N$, then every positive sentence (i.e. a sentence constructed using only $∀,∃,∧,∨,=$, with no negation or implication) that is true of $M$ is also true of $N$. For instance, being abelian is expressed by a positive sentence, which recovers Lee Mosher's example of proving a group nonabelian via a homomorphism onto a nonabelian group.
But in fact this idea is much more widely applicable than it may seem at first! For instance, the proof that the 15-puzzle, started from the solved state with any two tiles swapped, cannot be solved is based on an invariant: the sum, modulo 2, of the parity of the permutation of all 16 squares and the taxicab distance of the empty square from its desired final location is unchanged by every move. The parity of a permutation in $S_n$ is just a homomorphism from $S_n$ into $\mathbb{Z}/2\mathbb{Z}$, and this invariant is very useful in many results not just in combinatorics but also in linear algebra (such as Leibniz's determinant formula).
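As a quick illustrative check (a sketch of my own, not part of the argument above), here is the parity map computed by counting inversions, together with a randomized test of the homomorphism property $\mathrm{parity}(p\circ q)=\mathrm{parity}(p)+\mathrm{parity}(q) \pmod 2$:

```python
# Sketch: parity of a permutation, computed by counting inversions, is a
# homomorphism S_n -> Z/2:  parity(p ∘ q) = parity(p) + parity(q)  (mod 2).
import random

def parity(p):
    """0 if the permutation p (a tuple with p[i] = image of i) is even, 1 if odd."""
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return inversions % 2

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

for _ in range(1000):
    p = tuple(random.sample(range(8), 8))
    q = tuple(random.sample(range(8), 8))
    assert parity(compose(p, q)) == (parity(p) + parity(q)) % 2
print("parity respects composition on all sampled pairs in S_8")
```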
Just to make clear how the idea shows up in invariants, suppose we have a puzzle and want to prove that no sequence of moves can lead to a certain state. Then we can consider the structure $M$ of states with a function symbol for each possible move. The claim that a sequence of moves is a solution can then be expressed as an equation of the form "$y = f_1(f_2(\cdots f_k(x)\cdots))$". An invariant $i$ is a homomorphism on $M$. In some cases we can find such an $i$ with $i(f(x)) = i(x)$ for every move $f$ and every state $x$, which gives "$i(y) = i(x)$". But we may in general want to reason about the equivalence classes of states according to $i$. For instance, many permutation puzzles have parities, which need to be fixed appropriately before commutators can be used to solve them.
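Here is a small Python sketch of exactly this framework for the 15-puzzle (the state encoding is my own choice): every move preserves the combined parity invariant, and the invariant differs between the solved state and the solved-with-two-tiles-swapped state, so the latter is unreachable.

```python
# States are tuples of length 16 (0 marks the blank), moves slide a tile into the
# blank, and i(state) = parity of the permutation + taxicab distance of the blank
# (mod 2) satisfies i(f(x)) = i(x) for every move f, so states with different i
# are mutually unreachable.

def parity(p):
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j]) % 2

def invariant(state):
    blank = state.index(0)
    row, col = divmod(blank, 4)
    distance_to_home = (3 - row) + (3 - col)       # the blank's home is the last cell
    return (parity(state) + distance_to_home) % 2

def moves(state):
    """All states reachable in one slide of a tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 4)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = row + dr, col + dc
        if 0 <= r < 4 and 0 <= c < 4:
            other = 4 * r + c
            s = list(state)
            s[blank], s[other] = s[other], s[blank]
            yield tuple(s)

solved = tuple(list(range(1, 16)) + [0])
swapped = (2, 1) + solved[2:]                      # solved, but with tiles 1 and 2 swapped

assert all(invariant(t) == invariant(solved) for t in moves(solved))   # moves preserve i
print(invariant(solved), invariant(swapped))       # different values, so swapped is unreachable
```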
Another example is the winding around the origin of a continuous path that avoids the origin. Let $A$ be the set of continuous paths that do not pass through the origin. Let $s$ be a ternary relation on $A$ such that $s(P,Q,R)$ iff $P$ ends where $Q$ starts and $R$ is the result of joining $P$ to $Q$. There is a homomorphism $w$ from $(A,s)$ into $\mathbb{R}$ with the addition relation, such that $w(C)\in\mathbb{Z}$ for any closed path $C\in A$. Winding is used in one proof of the two-dimensional intermediate value theorem.
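As a rough numerical sketch (my own illustration, with a polygonal approximation standing in for a continuous path), the winding can be computed by summing signed angle increments, and it is additive under concatenation, which is the homomorphism property above:

```python
# Rough numerical sketch: winding of a polygonal path around the origin, computed
# by summing signed angle increments.  For a closed path the result is an integer,
# and joining two paths adds their windings.
import math

def winding(points):
    """Total signed angle (in turns) swept by a path given as a list of (x, y) points."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)  # signed angle between consecutive vectors
    return total / (2 * math.pi)

# A unit circle traversed once counterclockwise: winding = 1.
circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100)) for k in range(101)]
print(round(winding(circle), 6))                   # ~1.0

# Concatenation: the first half joined to the second half gives the same total.
first, second = circle[:51], circle[50:]
print(round(winding(first) + winding(second), 6))  # ~1.0, matching winding(circle)
```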
Furthermore, homomorphisms are useful in constructing new structures. For example, a field $F$ can be extended by adjoining a root of an irreducible polynomial $p$ over $F$, but showing this does use the homomorphism $j$ from $F[X]$ to $F[X]/(p·F[X])$ to get $p(j(X)) = j(p(X)) = 0$. For yet another example, the construction of the reals via Cauchy sequences of rationals arguably requires partitioning them into classes where, within each class, any two sequences have pointwise difference going to zero; effectively we are proving that there is a homomorphism on Cauchy sequences of rationals whose kernel is the set of sequences that go to zero. Sound familiar (the first isomorphism theorem)?
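For a tiny concrete instance of the first construction, here is a sketch (in Python, with my own encoding of the quotient) of $F[X]/(p\cdot F[X])$ for $F=\mathbb F_2$ and $p(X)=X^2+X+1$, verifying that the class of $X$ is a root of $p$:

```python
# Tiny sketch: adjoining a root of p(X) = X^2 + X + 1 to F = GF(2).
# An element a + b*X of F[X]/(p*F[X]) is stored as the pair (a, b); reduction
# uses X^2 ≡ X + 1 (mod p).  The class of X is then a root of p in the quotient.

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, then replace X^2 by X + 1.
    a, b = u
    c, d = v
    const, lin, quad = (a * c) % 2, (a * d + b * c) % 2, (b * d) % 2
    return ((const + quad) % 2, (lin + quad) % 2)

one = (1, 0)
x = (0, 1)                                   # j(X), the class of X in the quotient
p_at_x = add(add(one, x), mul(x, x))         # evaluate 1 + X + X^2 at the class of X
print(p_at_x)                                # (0, 0): the zero class, so j(X) is a root of p
```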
If we look at other algebraic structures, we also have the determinant of square matrices, which is a multiplicative homomorphism from the monoid of square matrices (under matrix multiplication) into the underlying ring (under its multiplication), and this is very useful in many proofs. Each module over a ring is essentially given by a ring homomorphism from the ring into the endomorphism ring of an abelian group. In geometry, it can be useful to use projection from 3d to 2d, such as in the proof of Desargues's theorem; here the projection is a homomorphism that respects collinearity.
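A quick sanity-check sketch (illustrative only) of the multiplicativity of the determinant on $2\times 2$ integer matrices:

```python
# Check that det(A·B) == det(A) * det(B) for 2x2 integer matrices,
# i.e. det respects multiplication (a homomorphism of multiplicative monoids).
import random

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(m, n):
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)) for i in range(2))

for _ in range(1000):
    A = tuple(tuple(random.randint(-9, 9) for _ in range(2)) for _ in range(2))
    B = tuple(tuple(random.randint(-9, 9) for _ in range(2)) for _ in range(2))
    assert det2(matmul2(A, B)) == det2(A) * det2(B)
print("det(AB) = det(A) det(B) on all sampled pairs")
```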
In a broad sense, a nontrivial homomorphism reduces a structure to a simpler one while respecting some operations and properties, and in doing so may reveal key features of the original structure or allow transferring knowledge about the initial structure to knowledge about the image.
Free monoid morphisms are studied in their own right in computer science, because they can be used to mimic Turing machines. This leads to the famous, easy-to-state decision problem called "Post's correspondence problem".
Let $g, h: \Sigma^*\rightarrow\Delta^*$ be two free monoid homomorphisms. The equaliser of $g$ and $h$ is the set of points where they agree, that is, $\operatorname{Eq}(g, h):=\{x\in\Sigma^*\mid g(x)=h(x)\}$. In 1946, Post encoded Turing machines into monoid morphisms and, via the halting problem, proved the following:
Theorem. It is undecidable in general whether $\operatorname{Eq}(g, h)$ is trivial or not.
The underlying decision problem is called Post's correspondence problem, and is a relatively standard topic for computer science students to learn about. Because it is so easy to state (compared to the halting problem, or even to the word problem for your favourite objects), it is often used in proofs of undecidability, e.g. undecidability of the matrix mortality problem. For concrete applications, see T. Harju and J. Karhumäki, "Morphisms", in Handbook of Formal Languages, Springer, Berlin, Heidelberg, 1997, pp. 439–510.
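To get a feel for what the problem asks, here is a brute-force Python sketch (the two morphisms form a made-up instance of my own) that searches for a nonempty word in $\operatorname{Eq}(g,h)$ up to a length bound; the undecidability theorem says that no such bound can work for all instances.

```python
# Brute-force sketch: search for a nonempty word in Eq(g, h), where the free monoid
# morphisms g, h are given by their values on single letters.  This is only a
# semi-decision procedure: in general no search bound suffices.
from itertools import product

# A made-up instance over Sigma = {a, b, c}; each morphism is determined by its letter images.
g = {"a": "ab", "b": "b", "c": "aa"}
h = {"a": "a", "b": "bb", "c": "baa"}

def apply(m, word):
    """Extend a map on letters to a monoid morphism on words."""
    return "".join(m[letter] for letter in word)

def find_equaliser_element(g, h, max_length):
    for length in range(1, max_length + 1):
        for letters in product(sorted(g), repeat=length):
            word = "".join(letters)
            if apply(g, word) == apply(h, word):
                return word
    return None

print(find_equaliser_element(g, h, max_length=8))   # "ab": g(ab) = abb = h(ab)
```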
Let's end with an open problem. The decidability of Post's correspondence problem depends on the size of $\Sigma$. For example, it is clearly decidable if $|\Sigma|=1$, and it is a theorem that it is decidable for $|\Sigma|=2$. In 2015, it was shown by Neary (doi) to be undecidable for $|\Sigma|=5$.
Problem. Is Post's correspondence problem decidable for $|\Sigma|=3$, and for $|\Sigma|=4$?