Can we extend the definition of a homomorphism to binary relations?

This is going to be quite a long post; the actual questions are at the end, in the section "Questions."

INTRODUCTION

After receiving an answer to this question about extending the definition of a continuous function to binary relations, I started thinking about doing the same with homomorphisms in abstract algebra. It seems more difficult to me because there does not seem to be a unique obvious way of doing it.

I tried a modest task first: to do this for semigroups (so for groups too in particular). Ring or algebra homomorphisms seem more difficult to generalize. I have ended up with two definitions for semigroup homomorphisms, but I'm not sure if they're of any use. Both fail to satisfy a property I thought they should satisfy. I will explain this later.

ATTEMPTED DEFINITIONS

Let $S,T$ be semigroups. A function $\varphi:S\to T$ is a homomorphism if it satisfies $$\varphi(ab)=\varphi(a)\varphi(b).\tag1$$

If $\rho\subseteq S\times T$ is a binary relation, the following also makes sense:

$$\rho(ab)=\rho(a)\rho(b),\tag2$$

but now $\rho(ab),\rho(a),\rho(b)$ are not elements of $T$, but of $2^T$. Yet $(2)$ makes sense, because $2^T$ also has a structure of a semigroup, with the operation defined by $$AB=\{ab\,|\,a\in A,\,b\in B\}.$$

This gives an idea for a definition.

Definition 1. $\rho$ is element-wise homomorphic iff for any $a,b\in S,$ we have $$\rho(ab)=\rho(a)\rho(b).$$
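For small finite semigroups, Definition 1 can be checked by brute force. Here is a minimal Python sketch; the particular example (the semigroup $\mathbb Z/4$ under addition, and the relation sending each $a$ to the coset $\{a,a+2\}$) is my own illustrative choice, not from the question:

```python
# Brute-force check of Definition 1 (element-wise homomorphic).
# Illustrative choices: S = T = Z/4 under addition, rho(a) = {a, a+2}.

def image(rho, a):
    """The set rho(a) = {t : (a, t) in rho}."""
    return {t for (s, t) in rho if s == a}

def setprod(A, B, op):
    """Product in the power semigroup 2^T: AB = {ab : a in A, b in B}."""
    return {op(a, b) for a in A for b in B}

def is_elementwise_homomorphic(rho, S, op_S, op_T):
    """rho(ab) == rho(a) rho(b) for all a, b in S."""
    return all(image(rho, op_S(a, b))
               == setprod(image(rho, a), image(rho, b), op_T)
               for a in S for b in S)

Z4 = range(4)
add4 = lambda x, y: (x + y) % 4
rho = {(a, (a + d) % 4) for a in Z4 for d in (0, 2)}
print(is_elementwise_homomorphic(rho, Z4, add4, add4))  # True
```

Here $\rho(a)\rho(b)=\{a+b,\,a+b+2\}=\rho(a+b)$, so the check succeeds.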

But there's another idea. Every function $\varphi:S\to T$ can be extended to a function $\overline\varphi : 2^S\to 2^T$ as follows. $$\overline\varphi(A)=\varphi(A),$$ where $\varphi(A)$ denotes the image of $A$ under $\varphi.$ Obviously, $$\overline\varphi(\{a\})=\{\varphi(a)\}.$$

Now let $\varphi$ be a homomorphism.

Fact 1. $\overline\varphi$ is a semigroup homomorphism from $2^S$ to $2^T$.

Proof. Let $A,B\subseteq S.$ Then

$$ \begin{eqnarray} \overline\varphi(AB)&=&\{\varphi(ab)\,|\, a\in A,\,b\in B\}\\ &=&\{\varphi(a)\varphi(b)\,|\,a\in A,\,b\in B\}\\ &=&\{xy\,|\,x\in\varphi(A),\,y\in\varphi(B)\}\\ &=&\{xy\,|\,x\in\overline\varphi(A),\,y\in\overline\varphi(B)\}\\ &=&\overline\varphi(A)\overline\varphi(B). \end{eqnarray} $$

Also, it is obvious that if $\overline\varphi$ is a homomorphism, then $\varphi$ is a homomorphism too, because $\varphi$ is essentially the restriction of $\overline\varphi$ to the subsemigroup $\{\{a\}\,|\,a\in S\}$ of $2^S$, which is isomorphic to $S.$ This gives us

Fact 2. $\varphi:S\to T$ is a homomorphism iff $\overline\varphi:2^S\to 2^T$ is a homomorphism.

Now $\rho$ also induces a function $\overline\rho:2^S\to 2^T$ by $$\overline\rho(A)=\rho(A),$$ where $\rho(A)$ denotes the image of $A$ under $\rho.$ This gives another idea for a definition.

Definition 2. $\rho$ is set-wise homomorphic iff $\overline\rho:2^S\to 2^T$ is a semigroup homomorphism.
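Definition 2 can likewise be tested exhaustively on a finite example by iterating over all pairs of subsets of $S$; a sketch, with the same illustrative choice of semigroup and relation as above:

```python
# Brute-force check of Definition 2 (set-wise homomorphic) on Z/4.
from itertools import combinations

def image_of_set(rho, A):
    """overline(rho)(A): the image of the set A under rho."""
    return {t for (s, t) in rho if s in A}

def setprod(A, B, op):
    return {op(a, b) for a in A for b in B}

def powerset(S):
    S = list(S)
    return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def is_setwise_homomorphic(rho, S, op_S, op_T):
    """overline(rho)(AB) == overline(rho)(A) overline(rho)(B) for all A, B."""
    return all(image_of_set(rho, setprod(A, B, op_S))
               == setprod(image_of_set(rho, A), image_of_set(rho, B), op_T)
               for A in powerset(S) for B in powerset(S))

Z4 = range(4)
add4 = lambda x, y: (x + y) % 4
rho = {(a, (a + d) % 4) for a in Z4 for d in (0, 2)}
print(is_setwise_homomorphic(rho, Z4, add4, add4))  # True
```

Incidentally (my observation, worth double-checking): since $\overline\rho$ turns unions into unions and the subset product distributes over unions, element-wise homomorphic implies set-wise homomorphic, and restricting Definition 2 to singletons gives the converse, so for relations the two definitions appear to coincide.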

WHY I THINK IT DOESN'T WORK

I think a good definition of a homomorphic relation $\rho$ should give that $\rho^{-1}$ is also homomorphic. This would be a generalization of the fact that a bijective homomorphism is an isomorphism. (That is, we don't need to check whether the inverse function is a homomorphism.) Both definitions fail here. Let $\varphi:\mathbb Z\to\mathbb Z$ be the trivial homomorphism $\varphi(n)=0$ on the additive group $\mathbb Z$, viewed as a relation. Then $$\overline{\left(\varphi^{-1}\right)}(\{1\}+\{-1\})=\overline{\left(\varphi^{-1}\right)}(\{0\})=\mathbb Z.$$

But $$\overline{\left(\varphi^{-1}\right)}(\{1\})=\varnothing$$ and $$\overline{\left(\varphi^{-1}\right)}(\{-1\})=\varnothing,$$ so

$$\overline{\left(\varphi^{-1}\right)}(\{1\})\overline{\left(\varphi^{-1}\right)}(\{-1\})=\varnothing.$$
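A finite analogue of this failure can be verified mechanically; below, $\mathbb Z/4$ stands in for $\mathbb Z$ (my substitution, to keep the check finite), with $1+3\equiv 0$ playing the role of $1+(-1)=0$:

```python
# Finite analogue of the counterexample: Z/4 in place of Z.
# phi is the trivial homomorphism phi(n) = 0; phi_inv is its inverse
# viewed as a relation, i.e. the set of pairs (phi(n), n).

add4 = lambda x, y: (x + y) % 4
phi_inv = {(0, n) for n in range(4)}

def image_of_set(rho, A):
    return {t for (s, t) in rho if s in A}

def setprod(A, B):
    return {add4(x, y) for x in A for y in B}

lhs = image_of_set(phi_inv, setprod({1}, {3}))   # image of {0}: all of Z/4
rhs = setprod(image_of_set(phi_inv, {1}), image_of_set(phi_inv, {3}))
print(lhs)  # {0, 1, 2, 3}
print(rhs)  # set()
```

The two sides disagree, so $\varphi^{-1}$ is not set-wise homomorphic (and, restricting to singletons, not element-wise homomorphic either).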

QUESTIONS

(1) Is there a standard definition of a "homomorphic binary relation" for

  • semigroups?
  • other algebraic structures?

(2) Has anything similar to the definitions I'm giving been tried in the literature?

(3) When defining a "homomorphic binary relation", what could make one definition more sensible than another? (This is imprecise of course, but I want to ask it in case someone has a precise answer.)


Solution 1:

This is a question by a bygone asker, who was apparently satisfied with the answer it got, but for the sake of not leaving the question half-answered (from my perspective), I'll point out that a notion specifically for semigroups does appear in Howie's Fundamentals of Semigroup Theory, in an exercise on p. 42. I'll reproduce it here in Howie's notation, who writes functions and relations on the right. If $\rho$ is a relation on $A\times B$ (i.e. a subset thereof), one gets a notion of "mapping" along the lines of a (union-of-outputs) multi-valued function by

$$a\rho = \{b\in B:(a,b)\in \rho\}$$

If $S$ and $T$ are semigroups, define a relational morphism $\mu$ from $S$ to $T$ as a relation satisfying both of the following:

RM1: $[\forall a\in S]\, a\mu\neq \emptyset$
RM2: $[\forall a,b\in S]\, (a\mu)(b\mu) \subseteq (ab)\mu$.

Furthermore, this relational morphism is said to be injective if it also satisfies

RM3: $[\forall a,b\in S]\, a\mu\cap b\mu \neq \emptyset \Rightarrow a\mu = b\mu$.
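These conditions are easy to test on finite examples. A Python sketch; the example relation, the inverse of the quotient map $\mathbb Z/4\to\mathbb Z/2$, is my own choice:

```python
def img(mu, a):
    """The set a mu = {b : (a, b) in mu}."""
    return {b for (x, b) in mu if x == a}

def is_relational_morphism(mu, S, op_S, op_T):
    rm1 = all(img(mu, a) for a in S)                     # RM1: a mu nonempty
    rm2 = all({op_T(x, y) for x in img(mu, a) for y in img(mu, b)}
              <= img(mu, op_S(a, b))                     # RM2: (a mu)(b mu) subset of (ab) mu
              for a in S for b in S)
    return rm1 and rm2

def is_injective(mu, S):
    """RM3: images that overlap must coincide."""
    return all(img(mu, a) == img(mu, b)
               for a in S for b in S if img(mu, a) & img(mu, b))

# Example: mu relates each residue mod 2 to its preimages in Z/4,
# i.e. the inverse of the quotient map Z/4 -> Z/2.
Z2, Z4 = range(2), range(4)
add2 = lambda x, y: (x + y) % 2
add4 = lambda x, y: (x + y) % 4
mu = {(a % 2, a) for a in Z4}
print(is_relational_morphism(mu, Z2, add2, add4))  # True
print(is_injective(mu, Z2))                        # True
```

This $\mu$ is an injective relational morphism, matching the fact (divisibility, below) that $\mathbb Z/2$ divides $\mathbb Z/4$.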

It is left as an exercise to the reader to show that every relational morphism is a subsemigroup of the direct product $S\times T$ (seen as a semigroup).

A closer look at the definitions above shows that only RM2 actually has something to do with the semigroup operations; RM1 and RM3 only deal with properties of the multivalued-function-like application of $\rho$, which by itself is not given any name by Howie.

Anyhow, Howie goes on to define divisibility of semigroups in the usual manner, i.e. using a run-of-the-mill function-based [homo]morphism: $S$ divides $T$ if there exists a subsemigroup $U$ of $T$ and a [run-of-the-mill, function-based] [homo]morphism $\psi$ from $U$ onto $S$; so, equivalently, $S$ is a quotient of a subsemigroup of $T$.

As the final point of the exercise (relating the two notions of morphism), you are asked to show that $S$ divides $T$ if and only if there exists an injective relational morphism from $S$ to $T$.

Alas, from this alone I would conclude that the notion of relational morphism doesn't appear particularly fruitful, since all we have used it for is to say something we could say just as nicely with function-based [homo]morphisms.

Howie later notes on p. 44:

The idea of a relational morphism appears in Eilenberg (1976) within the chapters written by Tilson, and is much used in applications of semigroups to language theory—see Pin (1986). It might have been more logical to call it a morphic relation, but the term 'relational morphism' is now standard.

The more detailed refs cited are:

  • Eilenberg, S. (1976). Automata, languages and machines, Vol. B. Academic Press, New York.
  • Pin, J.-E. (1986). Varieties of formal languages. North Oxford Academic Publishers, London. (A translation of Variétés de langages formels. Masson, Paris, 1984.)

I haven't looked at those references in-depth to see why/how this notion of relational morphism is fruitful.

However, Pin says that any relational morphism as above factorizes through its "inverse projections": if $\mu$ is a relational morphism from $S$ to $T$, then the graph of $\mu$, i.e. $\{(a,b) : b\in a\mu\}$, is a subsemigroup of $S \times T$ (as already said). If we denote this subsemigroup by $R$, then there exist [function-based] morphisms $\alpha : R \to S$ and $\beta : R \to T$ such that $\alpha$ is surjective and $\mu = \alpha^{-1}\beta$, where $\alpha$ is inverted as a relation. Given that this decomposition is possible for every relational morphism, Pin says:

Thus, if one has trouble to think in terms of relational morphisms, one can always go back to usual morphisms.
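This factorization can be made concrete on a small example. Below, $\mu$ is the inverse of the quotient map $\mathbb Z/4\to\mathbb Z/2$ (my illustrative choice), $R$ its graph, and $\alpha,\beta$ the two projections, all handled as sets of pairs:

```python
def compose(rho, sigma):
    """Relation composition: a (rho;sigma) c iff a rho b and b sigma c for some b."""
    return {(a, c) for (a, b) in rho for (b2, c) in sigma if b == b2}

# mu: relational morphism from S = Z/2 to T = Z/4 (inverse of the quotient map).
mu = {(a % 2, a) for a in range(4)}

R = mu                                      # graph of mu, a subsemigroup of S x T
alpha = {(p, p[0]) for p in R}              # projection R -> S (surjective)
beta = {(p, p[1]) for p in R}               # projection R -> T
alpha_inv = {(s, p) for (p, s) in alpha}    # alpha inverted as a relation

print(compose(alpha_inv, beta) == mu)  # True: mu factors as alpha^{-1} beta
```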

Tilson introduced some additional terminology and made some insightful observations. He calls a relation that satisfies RM1 a fully defined relation. As we've already noted, RM3 can be applied to relations in general, which can then be called injective. If a relation is injective, then its inverse is a partial function. Furthermore, if an injective relation is fully defined, then its inverse is a surjective partial function.

Relations satisfying RM1 have also been called left-total and the injective ones have also been called left-unique by Kilp, Knauer and Mikhalev. Or another way of putting it is that RM1+RM3 are the injective multi-valued functions.


A deeper result involving relational morphisms appears in Lambek's paper "The Butterfly and the Serpent", which I really enjoyed reading.

First I'll recast the above definitions in Lambek's terminology. Lambek defines a homomorphic relation to be a binary relation $\rho \subseteq A\times B$ whose graph is a subalgebra of the direct product. If $\rho$ additionally meets RM1, Lambek calls it universally defined, which he expresses using the equivalent definition $1_A \subseteq \rho^\vee\rho$, where $\rho^\vee$ is the inverse relation; Lambek writes relations in infix notation with respect to their arguments and composes them right-to-left: $c (\sigma\rho) a$ iff $[\exists b]\, c\sigma b \wedge b\rho a$. Note that unlike the other authors mentioned, Lambek does not include RM1 in his definition of homomorphic relation, so Lambek's definition is the only one mentioned here that coincides with the one that @Qiaochu Yuan gave in his answer above.

And finally we get to non-trivial facts. Lambek gives the following equivalent characterizations of a Maltsev variety (of which examples are plentiful, as you probably know, including groups, quasigroups, Heyting algebras, complemented lattices):

  • (the usual one) there exists a ternary term $f\, xyz$ such that $f\, xyy = x$ and $f\, yyz = z$
  • (what Maltsev proved) any two congruences permute; equivalently their join in the lattice of congruences is equal to their composition, so $\rho\sigma=\sigma\rho=\rho\sqcup\sigma$
  • (finally we get to) any homomorphic relation is difunctional, meaning that $\rho\rho^\vee\rho = \rho$. This property can be restated as $\rho$ being a regular element using the usual notion from semigroup theory, although Lambek expresses his displeasure with this latter terminology.
  • (one more) any reflexive homomorphic relation is a congruence relation.
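The difunctionality condition can be verified computationally for a concrete homomorphic relation between groups (groups being a Maltsev variety). The relation below, whose graph is the subgroup $\{(a,b): b-a\in\{0,2\}\}$ of $\mathbb Z/4\times\mathbb Z/4$, is my own illustrative choice; note the code composes relations left-to-right, unlike Lambek, which does not affect this check:

```python
def converse(rho):
    """The inverse relation rho^v."""
    return {(b, a) for (a, b) in rho}

def compose(rho, sigma):
    """Relation composition: a (rho;sigma) c iff a rho b and b sigma c for some b."""
    return {(a, c) for (a, b) in rho for (b2, c) in sigma if b == b2}

# A homomorphic relation on the group Z/4: its graph is the subgroup
# {(a, b) : b - a in {0, 2}} of Z/4 x Z/4.
rho = {(a, (a + d) % 4) for a in range(4) for d in (0, 2)}

# Groups form a Maltsev variety, so rho should be difunctional:
print(compose(compose(rho, converse(rho)), rho) == rho)  # True
```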

Homomorphic relations are perhaps not that exciting by themselves, but a derived notion surely gets interesting. Define a subcongruence to be a homomorphic relation from $A$ to $A$ that is symmetric ($\rho^\vee \subseteq \rho$) and transitive ($\rho\rho \subseteq \rho$), but not necessarily reflexive ($1_A \subseteq \rho$).

The notion of subcongruence allows a nice characterization of a generalization of Maltsev varieties called Goursat varieties, which can be defined by any of the following equivalent statements:

  • there exist two ternary terms $f\, xyz$ and $g\, xyz$ such that $f\, xyy = x$, $f\, yyz = g\, yzz$, and $g\, yyz = z$
  • any two congruences 3-permute, meaning that $\rho\sigma\rho=\sigma\rho\sigma=\rho\sqcup\sigma$, i.e. this common composite is also their join in the lattice of congruences
  • any homomorphic relation satisfies $\rho\rho^\vee\rho\rho^\vee = \rho\rho^\vee$, which is saying that $\rho\rho^\vee$ is an idempotent.
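Again this can be checked concretely. For the graph of the quotient map $\mathbb Z/4\to\mathbb Z/2$ (my choice of example), $\rho\rho^\vee$ is the mod-2 congruence on $\mathbb Z/4$, and it is indeed idempotent:

```python
def converse(rho):
    return {(b, a) for (a, b) in rho}

def compose(rho, sigma):
    """a (rho;sigma) c iff a rho b and b sigma c for some b."""
    return {(a, c) for (a, b) in rho for (b2, c) in sigma if b == b2}

# Graph of the quotient map Z/4 -> Z/2: a homomorphic relation between groups.
rho = {(a, a % 2) for a in range(4)}

e = compose(rho, converse(rho))   # rho rho^v, a relation on Z/4
print(compose(e, e) == e)  # True: rho rho^v is idempotent
```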

The punchline of Lambek's paper is that one can state a general version of Goursat's theorem in a Goursat variety. Furthermore, in the variety of groups, one recovers Zassenhaus' butterfly lemma as a consequence of this formulation of Goursat's theorem.

Before I end this (rather long post!) with Goursat's theorem, we need one more definition, which, alas, Lambek only gives as a formula without naming it. Given a subcongruence $\sigma$ on $A$, define $\mathrm{Dom}\,\sigma = \{a\in A : a \sigma a\}$. Since $\sigma$ is not necessarily reflexive, it makes good sense to consider this set. I suppose this notation can be read as the "domain" of $\sigma$, but that seems a little misleading since it is a subset of $A$; perhaps reading it as the [sub]diagonal of $\sigma$ makes more sense.

Anyway, Goursat's Theorem:

If $\rho$ is any homomorphic relation from $A$ to $B$ such that $\rho^\vee\rho$ and $\rho\rho^\vee$ are subcongruences of $A$ and $B$ respectively, in particular if $\rho$ is any homomorphic relation whatsoever between two algebras in a Goursat variety, then $\mathrm{Dom}\,\rho\rho^\vee / \rho\rho^\vee \simeq \mathrm{Dom}\, \rho^\vee\rho / \rho^\vee\rho$.

Since long MathJax posts get horribly slow to edit (probably some $O(n^2)$ algorithm at work), I'll stop here by just mentioning that Lambek shows that one can derive the Snake Lemma from Goursat's Theorem as well.

Solution 2:

Instead of relations I think it is a better idea to work with spans, or correspondences. A span between two objects $a$ and $b$ in a category $C$ is a diagram of the form $a \leftarrow c \to b$. Spans admit a notion of composition described for example at the nLab in any category with pullbacks (which includes in particular any algebraic category), giving a category $\text{Span}(C)$. If $a$ and $b$ are sets and $R$ a relation between them, then letting $c = \{ (x, y) : x \in a, y \in b, xRy \}$ and thinking of the obvious projections $c \to a, c \to b$ we see that the category of relations embeds into the category of spans of sets. In general every category $C$ embeds into $\text{Span}(C)$ (just take $c = a$ and the map $c \to a$ to be the identity).

Recall that a relation between two sets $S, T$ can be described as a subobject of $S \times T$, but this is just some special kind of morphism $R \to S \times T$, and in full generality there's no reason to impose any additional requirements on this morphism. If you wanted to, though, you could use the following.

Definition: A relational homomorphism between two algebraic structures $R, S$ is a substructure of $R \times S$.

For example a relational homomorphism between two semigroups is a subsemigroup of $R \times S$.
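On finite structures, this definition reduces to a closure check on the set of pairs. A minimal sketch for semigroups; the example relation on $\mathbb Z/4$ is my own choice:

```python
def is_relational_homomorphism(rho, op_R, op_S):
    """Check that rho, a set of pairs from R x S, is closed under the
    componentwise product, i.e. a subsemigroup of the direct product."""
    return all((op_R(a, c), op_S(b, d)) in rho
               for (a, b) in rho for (c, d) in rho)

add4 = lambda x, y: (x + y) % 4
# rho = {(a, b) : b - a in {0, 2}}, a subsemigroup of Z/4 x Z/4.
rho = {(a, (a + d) % 4) for a in range(4) for d in (0, 2)}
print(is_relational_homomorphism(rho, add4, add4))  # True
```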