Whence this generalization of linear (in)dependence?
I recently came across a definition of (in)dependence that is supposed to be a generalization of linear (in)dependence among a set of vectors:
An element $x$ is dependent on a set of elements $\{a_1, \cdots, a_n\}$ iff any two real-valued additive functions that take equal values on each of the elements $a_1, \cdots, a_n$ also agree on $x$. A set of elements $\{a_1,\cdots,a_n\}$ is independent iff no member of the set is dependent on the rest.
The term "additive function" used in this definition describes a (real-valued) function $f$ such that $f(a + b) = f(a) + f(b)\;\; \forall a, b \in \mathrm{dom}(f)$. Of course, this presupposes that some form of addition is defined in the domain of such $f$. It is safe to assume that this domain is an Abelian group, but you may assume additional structure if this is necessary to answer my question.
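To make the definition concrete: in $\mathbb{R}^2$, take $a_1=(1,0)$ and $a_2=(0,1)$. Additivity alone forces any additive $f$ to respect integer combinations,
$$f(m a_1 + n a_2) = m\,f(a_1) + n\,f(a_2)\qquad (m,n\in\mathbb{Z}),$$
so, for example, $x=(2,3)$ is dependent on $\{a_1,a_2\}$: any two additive functions that agree on $a_1$ and $a_2$ must agree on $x$.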
I find this definition of dependence much harder to think about and work with than the standard definition of linear dependence for vectors, so I wonder what's the justification for such a counterintuitive definition.
Is anyone familiar with this way of defining dependence/independence? If so, what algebraic structure is it used for?
Solution 1:
So, I don't know where the definition came from, or whether it is inspired by the following, but it reminds me of the notion of dominion.
The notion was introduced by John Isbell in 1965 as a tool to study epimorphisms (Epimorphisms and dominions, Proc. Conf. Categorical Algebra (La Jolla, Calif., 1965), pp. 232-246, Springer-Verlag, New York, 1966; MR0209202).
Definition. Let $\mathcal{C}$ be a category of algebras (in the sense of universal algebra), and let $A\in\mathcal{C}$. Given a subset $S$ of $A$, we say that $S$ dominates $a\in A$ if and only if for every object $B\in\mathcal{C}$ and every pair of $\mathcal{C}$-morphisms $f,g\colon A\to B$, if $f(s)=g(s)$ for all $s\in S$, then $f(a)=g(a)$. The collection of all $a\in A$ dominated by $S$ is called the dominion of $S$ in $A$ (relative to $\mathcal{C}$). We denote it by $\mathrm{dom}^{\mathcal{C}}_A(S)$, with $\mathcal{C}$ omitted if it is understood from context.
First, because morphisms respect the structure, it should be clear that the dominion of $S$ always contains the subalgebra generated by $S$. It is not hard to show that the dominion is a subalgebra, and in fact that the dominion construction is a closure operator on the lattice of subalgebras of $A$.
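For instance, closure under a binary operation $*$ is immediate: if $a$ and $b$ lie in the dominion of $S$ and $f,g$ agree on $S$, then $f(a*b)=f(a)*f(b)=g(a)*g(b)=g(a*b)$, so $a*b$ lies in the dominion as well; the same computation works for operations of any arity.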
The connection with epimorphisms is the following: an epimorphism is a right-cancellable morphism. That is, $f\colon X\to Y$ is an epimorphism if for every $g,h\colon Y\to Z$, if $gf = hf$, then $g=h$. In the context of categories of algebras, $f$ is an epimorphism if and only if the dominion of $f(X)$ in $Y$ is all of $Y$.
Moreover, if the category $\mathcal{C}$ is closed under subobjects and quotients, then every morphism $f\colon X\to Y$ can be factored into a canonical projection $X\to X/\mathrm{Ker}(f)$ (where $\mathrm{Ker}(f)$ is the congruence of all $(a,b)\in X\times X$ with $f(a)=f(b)$), which is well understood, followed by the embedding $X/\mathrm{Ker}(f)\hookrightarrow f(X)\subseteq Y$, so that we really only need to "understand" dominions of subalgebras in order to understand epimorphisms.
The concept also makes sense in other settings, specifically when the objects in your category are "enriched sets" and the morphisms are set-theoretic maps.
Examples.
- In the category of abelian groups, $\mathrm{dom}_A(B)=B$ for all abelian groups $A$ and subgroups $B$. To see this, note that the canonical projection $A\to A/B$ and the zero map $A\to A/B$ agree on $B$ and nowhere else, so no element of $A$ not in $B$ can be in the dominion.
- More generally, in the category of (left) $R$-modules, for all (left) modules $M$ and submodules $N$, $\mathrm{dom}_M(N) = N$. This can again be verified by comparing the canonical projection $M\to M/N$ and the zero map.
- In particular, in the category of $k$-vector spaces, the dominion of a subspace is the subspace itself. Hence, the dominion of a subset $S\subseteq V$ is precisely the span of $S$. Note that this gives a way to interpret linear dependence and linear independence along the lines of your query, which is precisely what made the connection in my head: say $S=\{s_1,\ldots,s_n\}$ is a subset of the vector space $V$. Then $x$ is dominated by $S$ ("depends on $S$") if and only if for any pair of linear transformations $f,g$ with domain $V$ and the same codomain, $f(s_i)=g(s_i)$ for all $i$ implies $f(x)=g(x)$. This occurs if and only if $x$ is a linear combination of the $s_i$, if and only if $x$ lies in the span of $S$. We could say a set $S$ is "redundant" if there exists $x\in S$ such that $S\setminus\{x\}$ dominates $x$, so that "non-redundant" sets would correspond to linearly independent sets (see the computational sketch after this list).
- In the category of all groups, we also have that $\mathrm{dom}_G(H) = H$ for all groups $G$ and subgroups $H$. The simplest way to see this is to use the amalgamated free product $G*_HG$ of $G$ with itself over $H$, and the two canonical embeddings of $G$ into this product; they agree precisely on $H$ and nowhere else.
- On the other hand, in the category of semigroups, we can find semigroups $S$ and subsemigroups $T$ such that $\mathrm{dom}_S(T)\neq T$. For a trivial example, take a group $G$ and a subsemigroup $T$ that is not a group. Since the image of a group under a semigroup homomorphism must be a group, the dominion of $T$ in $G$ relative to the category of semigroups is in fact the subgroup generated by $T$. Thus, for instance, the dominion of $\mathbb{N}$ in $\mathbb{Z}$ is all of $\mathbb{Z}$. There are plenty of less silly examples, and Isbell gave an intrinsic description of exactly when an element of $S$ is dominated by the subsemigroup $T$, in terms of certain connected factorizations; this is the Zigzag Lemma for semigroups. In general, the dominion can be something strictly between $T$ and $S$, all of $S$, just $T$, etc.
- For monoids (semigroups with identity), commutative semigroups, and commutative monoids, the dominion coincides with the dominion when we consider the objects as lying in the larger category of all semigroups. This is not obvious, since enlarging your category will, in general, tend to "shrink" the dominion (there are more pairs of maps to consider). This is a theorem of Isbell and Howie (Isbell, J.R. and Howie, J.M., Epimorphisms and dominions II, J. Algebra 6 (1967), pp. 7-21; MR0209203).
- The category of rings also has nontrivial examples. For instance, any two ring homomorphisms with domain $\mathbb{Q}$ that agree on $\mathbb{Z}$ must agree on all of $\mathbb{Q}$ (that is, $\mathbb{Z}\hookrightarrow \mathbb{Q}$ is an epimorphism); a short verification appears after this list. The precise description of the dominion in rings is complicated (Isbell got it wrong in his first paper).
- For a non-algebraic example, in the category of Hausdorff topological spaces, we have that $\mathrm{dom}_X(Y) = \overline{Y}$, the closure of $Y$. Any two continuous maps into a Hausdorff space that agree on a set must agree on its limit points, so the closure of $Y$ is certainly contained in the dominion. For the converse inclusion, consider $X/\sim$, where $\sim$ is the relation that identifies all points of $\overline{Y}$ to a single point, and compare the constant map onto that point with the canonical projection $X\to X/{\sim}$.
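To illustrate the vector-space example computationally, here is a minimal sketch, assuming `numpy` is available, that tests whether $x$ lies in the span of $S$ (equivalently, whether $S$ dominates $x$ in the category of vector spaces) by comparing matrix ranks; the names `is_dominated` and `is_redundant` are just illustrative.

```python
import numpy as np

def is_dominated(S, x, tol=1e-10):
    """Test whether x lies in span(S); in the category of vector
    spaces the dominion of S is exactly its span."""
    A = np.column_stack(S)            # columns are the vectors s_i
    Ax = np.column_stack(S + [x])     # the same matrix with x adjoined
    # x is a linear combination of the s_i iff adjoining it does not raise the rank
    return np.linalg.matrix_rank(Ax, tol=tol) == np.linalg.matrix_rank(A, tol=tol)

def is_redundant(S):
    """A set is "redundant" if some element is dominated by the rest."""
    return any(is_dominated(S[:i] + S[i+1:], S[i]) for i in range(len(S)))

S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(is_dominated(S, np.array([1.0, 1.0, 2.0])))     # True:  s1 + s2
print(is_dominated(S, np.array([0.0, 0.0, 1.0])))     # False: not in the span
print(is_redundant(S + [np.array([1.0, 1.0, 2.0])]))  # True: third vector is s1 + s2
```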
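And to expand the ring example: if $f,g\colon \mathbb{Q}\to R$ are (unital) ring homomorphisms that agree on $\mathbb{Z}$, then for any $n\neq 0$,
$$f\!\left(\tfrac{1}{n}\right) = f\!\left(\tfrac{1}{n}\right)g(n)\,g\!\left(\tfrac{1}{n}\right) = f\!\left(\tfrac{1}{n}\right)f(n)\,g\!\left(\tfrac{1}{n}\right) = g\!\left(\tfrac{1}{n}\right),$$
using $f(n)=g(n)$; hence $f\!\left(\tfrac{m}{n}\right) = f(m)f\!\left(\tfrac{1}{n}\right) = g(m)g\!\left(\tfrac{1}{n}\right) = g\!\left(\tfrac{m}{n}\right)$, so $f=g$.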
If we take the definition you give and allow additive maps into any semigroup, then it describes precisely when $x$ is in the dominion of $\{x_1,\ldots,x_n\}$ in the category of (commutative) semigroups. I don't know off-hand whether restricting the codomain to $\mathbb{R}$ will necessarily "enlarge" the dominion if you start with a semigroup only (I suspect it would).
But if you are starting with a vector space, then: if the ground field is of positive characteristic, then an additive function into $\mathbb{R}$ must be the zero function; so every vector is dominated/depends on any set, and the only independent set is the empty set. If the ground field is characteristic zero, then additive functions into $\mathbb{R}$ must be $\mathbb{Q}$-linear, and the concept you are defining is just $\mathbb{Q}$-linear independence and $\mathbb{Q}$-linear dependence.
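Both claims follow from additivity alone. In characteristic $p$ we have $p\cdot v = 0$ for every $v\in V$, so $0 = f(p\cdot v) = p\,f(v)$ in $\mathbb{R}$, forcing $f(v) = 0$. In characteristic zero, $n\,f\!\left(\tfrac{m}{n}v\right) = f(mv) = m\,f(v)$ gives $f\!\left(\tfrac{m}{n}v\right) = \tfrac{m}{n}\,f(v)$, which is exactly $\mathbb{Q}$-linearity.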
Solution 2:
(Posting as CW because this is actually a reposting of Arturo Magidin's comment to my now-deleted answer.)
If your domain is actually a vector space $V$ over some field $K$, let $F\subseteq K$ be its prime field; then $K$ is a vector space over $F$, and therefore $V$ is also a vector space over $F$. Consider functions from $V$ to $K$.
Then we have the chain of implications: $K$-linearity implies additivity, which implies $F$-linearity. But if $K$ is a proper extension of $F$ (and $V$ is nonzero), then one can always find a function on $V$ that is $F$-linear but not $K$-linear.
Proof: That $K$-linearity implies additivity is trivial. In the case that $F$ is the finite prime field $\mathbb{Z}/p\mathbb{Z}$, multiplication by any element of $F$ amounts to multiplication by an integer between $0$ and $p-1$, and hence can be obtained by repeated addition. In the case that $F$ is $\mathbb{Q}$, additivity implies $\mathbb{Z}$-linearity, and then using that $f(v) = n\,f(v/n)$ you get $\mathbb{Q}$-linearity.
For the second statement, since $K$ is a vector space over $F$ of dimension greater than $1$, we can find a basis $\{a_i\}$ of $K$ as a vector space over $F$ such that $a_1 = 1$. Let $\{b_j\}$ be a basis of $V$ as a vector space over $K$. Then the elements $\{a_i b_j\}$ form a basis for $V$ as a vector space over $F$. Consider the function $f$ such that $f(a_i b_j) = 1$ if $i = 1$ and $0$ otherwise, and extend $F$-linearly. This function is additive but not $K$-linear: indeed, $f(a_2 b_1) = 0$, while $a_2\,f(b_1) = a_2 \neq 0$.
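For a concrete instance of this construction: take $F=\mathbb{Q}$, $K=\mathbb{Q}(\sqrt{2})$, and $V=K$, with $a_1 = 1$, $a_2 = \sqrt{2}$, and $b_1 = 1$. The resulting function is $f(p + q\sqrt{2}) = p$ for $p,q\in\mathbb{Q}$, which is additive, but $f(\sqrt{2}\cdot 1) = 0 \neq \sqrt{2} = \sqrt{2}\,f(1)$, so it is not $K$-linear.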