Why don't we study algebraic objects with more than two operations?
Undergraduates learn about algebraic objects with one operation, namely groups, and we learn about algebraic objects with two "compatible" operations, namely rings and fields. It seems natural to then look at algebraic objects with three or more operations that are compatible, but we don't learn about them. I asked one of my professors why this is so and she posed this question in response: we have left-modules and right-modules, but there are no top-modules or bottom-modules, or any other ways of combining two elements to produce a third. I can't think of any satisfactory answer to either of these questions. Can anyone shed any light on them?
Edit: Now that I know these objects are studied, what I meant by "we" is essentially "Why are these objects not introduced to undergraduates (at least in a standard curriculum) given how natural they seem?"
Solution 1:
While it is true that algebraic structures based on binary operations are very common, other structures exist, are being studied, and are very important. Examples include:
$A_\infty $-spaces, where homotopy considerations mandate not just one binary operation, but an $n$-ary operation for all $n\in \mathbb N$.
Malcev operations, which are examples of structures based on a ternary operation.
Operads, which also have $n$-ary operations.
For many of the structures above things like modules and actions make sense, and themselves involve $n$-ary operations.
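To make one of these concrete: a Malcev operation is a ternary operation $p$ satisfying $p(x,x,z)=z$ and $p(x,z,z)=x$, and every group carries one, namely $p(x,y,z)=xy^{-1}z$. A minimal sketch (the modulus $7$ is an arbitrary choice for the demonstration), using the integers mod $7$ under addition, where the operation becomes $p(x,y,z)=x-y+z$:

```python
# A Malcev operation: a ternary operation p satisfying
#   p(x, x, z) = z   and   p(x, z, z) = x.
# Every group carries one, p(x, y, z) = x * y^{-1} * z; here we use the
# integers mod 7 under addition, so p(x, y, z) = x - y + z.

N = 7  # arbitrary modulus, chosen only for this demonstration

def p(x, y, z):
    return (x - y + z) % N

# verify both Malcev identities over the whole (finite) structure
for x in range(N):
    for z in range(N):
        assert p(x, x, z) == z
        assert p(x, z, z) == x
```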
So, mathematicians certainly do study such structures. Perhaps the reason they are not commonly introduced at the undergraduate level is that these structures are more complicated than ones based on binary operations.
As for the comment made by your professor, I can think of many ways elements can be combined to give a new element, so I really don't know what is meant by that.
And, since you are wondering if such $n$-ary based algebraic structures are too complicated for undergraduates, I'll just mention that, of the three mentioned above, $A_\infty$-spaces are quite complicated, but Malcev operations and operads are not. Operads come in many flavours, and if one considers what are known as coloured planar non-enriched operads (probably the simplest kind of operad), then this is a structure that can be understood by a first-year student (and this is actually quite an important class of operads, so not just a toy algebraic structure). The reason these structures are not introduced early on has more to do with the fact that university curricula and textbooks change and adapt very slowly; they very rarely reflect current trends. In 100 years it is likely that operads will make it into first- or second-year textbooks, much as groups do today.
And while on the subject, one must also consider algebraic structures with operations of infinite arity. These too exist and provide some surprising examples. For instance, it is a classical result that the category of compact Hausdorff spaces is algebraic, which means that category can be thought of as consisting of algebraic structures with operations of arity $\infty$. Other important examples include complete lattices.
Solution 2:
One reason that higher-arity operations are less common is that they can always be replaced by compositions of binary operations. During the 1930s and 1940s, Sierpiński studied compositions of operations ("clones") and proved that every $n$-ary operation on a set is a finite composition of binary operations on the set; see W. Sierpiński, Sur les fonctions de plusieurs variables, Fund. Math. 33 (1945), 169-173.
A proof is especially simple for operations on a finite set $\rm\:A\:.\:$ Namely, if $\rm\:|A| = n\:$ then we may encode $\rm\:A\:$ by $\rm\:\mathbb Z/n\:,\:$ the ring of integers $\rm\:mod\ n\:,\:$ allowing us to employ Lagrange interpolation to represent any finitary operation as a finite composition of the binary operations $\rm\: +,\ *\:,\:$ and $\rm\: \delta(a,b) = 1\ if\ a=b\ else\ 0\:,\:$ namely
$$\rm f(x_1,\ldots,x_n)\ = \sum_{(a_1,\ldots,a_n)\ \in\ A^n}\ f(a_1,\ldots,a_n)\ \prod_{i\ =\ 1}^n\ \delta(x_i,a_i) $$
When $\rm\:|A|\:$ is infinite one may instead proceed by employing pairing functions $\rm\:A^2\to A\:.$
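The finite case can be checked directly. A minimal sketch of the interpolation formula above, with $\rm A = \mathbb Z/3$ and a ternary operation $\rm f$ chosen arbitrarily for illustration:

```python
# Sketch of the interpolation argument for |A| = n: any finitary operation
# on Z/n is a composition of the binary operations +, * and
# delta(a, b) = 1 if a = b else 0.

from itertools import product

n = 3
A = range(n)

def delta(a, b):
    return 1 if a == b else 0

# an arbitrary ternary operation on Z/3, chosen only for illustration
def f(x, y, z):
    return (x * y + 2 * z + 1) % n

# the interpolated version, built only from +, * and delta:
#   f(x) = sum over all tuples a of f(a) * prod_i delta(x_i, a_i)
def f_interp(*xs):
    total = 0
    for args in product(A, repeat=len(xs)):
        term = f(*args)
        for x, a in zip(xs, args):
            term *= delta(x, a)   # kills every term except args == xs
        total += term
    return total % n

# the two functions agree everywhere on A^3
assert all(f(x, y, z) == f_interp(x, y, z)
           for x, y, z in product(A, repeat=3))
```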
For further remarks and references see this answer.
Solution 3:
We do. For example, differential fields (fields equipped with a derivation) and exponential fields (fields equipped with an exponentiation).
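A derivation $D$ is an additive map satisfying the Leibniz rule $D(fg) = D(f)g + fD(g)$. A minimal sketch, using the formal derivative on polynomials with integer coefficients stored as coefficient lists (this derivation extends to the rational function field, giving a differential field):

```python
# A derivation D is additive and satisfies the Leibniz rule
#   D(fg) = D(f) g + f D(g).
# Sketch: the formal derivative on polynomials stored as coefficient
# lists [c0, c1, c2, ...], lowest degree first.

def D(poly):
    d = [i * c for i, c in enumerate(poly)][1:]
    return d or [0]           # derivative of a constant is 0

def mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def add(f, g):
    m = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
            for i in range(m)]

# f = x^2 + 1,  g = 2x + 3
f, g = [1, 0, 1], [3, 2]
assert D(mul(f, g)) == add(mul(D(f), g), mul(f, D(g)))   # Leibniz rule
```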
Solution 4:
The reason there are only "left" and "right" modules has to do with the fact that the operation laws involved are binary operations. Since they take only two inputs, there are two possible ways for the operations to be performed.
Here is what I mean: a left $R$-module is one for which the action of $R$ on $M$ is biadditive and additionally $(rs)m=r(sm)$. A right $R$-module is one for which the action of $R$ on $M$ is biadditive and additionally $(sr)m=r(sm)$. Now the last identity is almost always suggestively written on the other side as $m(sr)=(ms)r$, but I'm writing it on the left to highlight that it doesn't matter which side you write it on; what matters is the order in which $r$ and $s$ act. You can't combine $r$ and $s$ in more than these two ways (using $R$'s multiplication).
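The distinction can be made concrete with $2\times 2$ integer matrices, a noncommutative ring: column vectors form a left module and row vectors a right module, and the two axioms genuinely differ because $rs \ne sr$. A sketch (the helper names are just for illustration):

```python
# R = 2x2 integer matrices (noncommutative), acting on length-2 vectors.
#   left module axiom:  (rs)m = r(sm)   -- column vectors, action r @ m
#   right module axiom: (sr)m = r(sm)   -- row vectors, the "left-written"
#                                          action act(r, m) really computes m @ r

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(a, v):   # column vector: action on the left
    return [sum(a[i][k] * v[k] for k in range(2)) for i in range(2)]

def vecmat(v, a):   # row vector: action on the right
    return [sum(v[k] * a[k][j] for k in range(2)) for j in range(2)]

r = [[1, 2], [3, 4]]
s = [[0, 1], [1, 1]]
m = [5, 7]

# left module axiom: (rs)m = r(sm)
assert matvec(matmul(r, s), m) == matvec(r, matvec(s, m))

# right module axiom written on the left: (sr)m = r(sm)
act = lambda a, v: vecmat(v, a)
assert act(matmul(s, r), m) == act(r, act(s, m))

# and rs != sr, so the two axioms really are different
assert matmul(r, s) != matmul(s, r)
```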
Solution 5:
As is clear now from the other nice answers, we do study things with more than two binary operations. Especially if you are interested in physics or symplectic geometry: Poisson algebras. These are algebras (so they come right away with addition and multiplication) which are simultaneously Lie algebras (so they have a third binary operation, usually written $\{a,b\}$ and called the Poisson bracket) in a compatible way: for each element $a$ of the algebra, the Poisson bracket $\{a,\cdot\}$ defines a derivation of the underlying algebra structure.
Probably the most important source of examples comes from symplectic geometry: given a symplectic manifold, its structure sheaf is a sheaf of Poisson algebras.
The only reason such things are not (usually) discussed in undergraduate mathematics is inertia. They should be.