Haskell's algebraic data types
I'm trying to fully understand all of Haskell's concepts.
In what ways are algebraic data types similar to generic types, e.g., in C# and Java? And how are they different? What's so algebraic about them anyway?
I'm familiar with universal algebra and its rings and fields, but I only have a vague idea of how Haskell's types work.
Haskell's algebraic data types are so named because they correspond to an initial algebra in category theory, giving us some laws, some operations, and some symbols to manipulate. We may even use algebraic notation for describing regular data structures, where:
- `+` represents sum types (disjoint unions, e.g. `Either`),
- `•` represents product types (e.g. structs or tuples),
- `X` for the singleton type (e.g. `data X a = X a`),
- `1` for the unit type `()`,
- and `μ` for the least fixed point (e.g. recursive types), usually implicit.
with some additional notation:

- `X²` for `X•X`
In fact, you might say (following Brent Yorgey) that a Haskell data type is regular if it can be expressed in terms of `1`, `X`, `+`, `•`, and a least fixed point.
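For a concrete (if informal) reading of this notation in Haskell itself, the building blocks line up roughly as follows; the type names in this sketch are just illustrative:

```haskell
type One     = ()       -- 1, the unit type
newtype X a  = X a      -- X, the singleton type
type Sum     = Either   -- +, sum types
type Product = (,)      -- •, product types

-- For example, 1 + X is Either () a, which is isomorphic to Maybe a:
toMaybe :: Either () a -> Maybe a
toMaybe (Left ()) = Nothing
toMaybe (Right x) = Just x
```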
With this notation, we can concisely describe many regular data structures:
- Units: `data () = ()`, described by `1`
- Options: `data Maybe a = Nothing | Just a`, described by `1 + X`
- Lists: `data [a] = [] | a : [a]`, described by `L = 1 + X•L`
- Binary trees: `data BTree a = Empty | Node a (BTree a) (BTree a)`, described by `B = 1 + X•B²`
Other operations hold (taken from Brent Yorgey's paper, listed in the references):
- Expansion: unfolding the fixed point can be helpful for thinking about lists: `L = 1 + X + X² + X³ + ...` (that is, lists are either empty, or they have one element, or two elements, or three, or ...)
- Composition, `◦`: given types `F` and `G`, the composition `F ◦ G` is a type which builds "F-structures made out of G-structures" (e.g. `R = X • (L ◦ R)`, where `L` is lists, is a rose tree; see the sketch after this list).
- Differentiation: the derivative of a data type `D` (written `D′`) is the type of D-structures with a single "hole", that is, a distinguished location not containing any data. Amazingly, it satisfies the same rules as differentiation in calculus (see the list example after this list):

      1′ = 0
      X′ = 1
      (F + G)′ = F′ + G′
      (F • G)′ = F • G′ + F′ • G
      (F ◦ G)′ = (F′ ◦ G) • G′
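To make the last two operations concrete, here is a small Haskell sketch (the type and function names are illustrative, not taken from the paper):

```haskell
-- Composition: R = X • (L ◦ R).  A rose tree is an element paired with a
-- list of rose trees.
data Rose a = Rose a [Rose a]

-- Differentiation: from L = 1 + X•L one can calculate L′ = L • L, i.e. a
-- list with one hole is a pair of lists: the elements before the hole
-- (stored nearest-first, zipper style) and the elements after it.
data ListHole a = ListHole [a] [a]

-- Filling the hole with an element rebuilds an ordinary list.
fill :: ListHole a -> a -> [a]
fill (ListHole before after) x = reverse before ++ [x] ++ after
```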
References:
- Brent A. Yorgey, "Species and Functors and Types, Oh My!", Haskell '10, September 30, 2010, Baltimore, Maryland, USA.
- Conor McBride, "Clowns to the Left of Me, Jokers to the Right (Dissecting Data Structures)", POPL 2008.
"Algebraic Data Types" in Haskell support full parametric polymorphism, which is the more technically correct name for generics, as a simple example the list data type:
data List a = Cons a (List a) | Nil
is equivalent (as far as is possible, and ignoring non-strict evaluation, etc.) to
abstract class List<a>
{
    public sealed class Cons : List<a>
    {
        public a head;
        public List<a> tail;
    }

    public sealed class Nil : List<a> { }
}
Of course Haskell's type system allows more ... interesting uses of type parameters, but this is just a simple example. With regards to the "Algebraic Type" name, I've honestly never been entirely sure of the exact reason for them being named that, but have assumed that it's due to the mathematical underpinnings of the type system. I believe the reason boils down to the theoretical definition of an ADT being the "product of a set of constructors"; however, it's been a couple of years since I escaped university, so I can no longer remember the specifics.

[Edit: Thanks to Chris Conway for pointing out my foolish error: ADTs are of course sum types, the constructors providing the product/tuple of fields.]
In universal algebra an algebra consists of some sets of elements (think of each set as the set of values of a type) and some operations, which map elements to elements.
For example, suppose you have a type of "list elements" and a type of "lists". As operations you have the "empty list", which is a 0-argument function returning a "list", and a "cons" function which takes two arguments, a "list element" and a "list", and produces a "list".
At this point there are many algebras that fit the description, because two undesirable things may happen:

- There could be elements in the "list" set which cannot be built from the "empty list" and the "cons" operation, so-called "junk". These could be lists starting from some element that fell from the sky, or loops without a beginning, or infinite lists.
- The results of "cons" applied to different arguments could be equal; e.g., consing an element onto a non-empty list could be equal to the empty list. This is sometimes called "confusion".
An algebra which has neither of these undesirable properties is called initial, and this is the intended meaning of the algebraic data type.
The name initial derives from the property that there is exactly one homomorphism from the initial algebra to any given algebra. Essentially you can evaluate the value of a list by applying the operations in the other algebra, and the result is well-defined.
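To make "evaluate the value of a list by applying the operations in the other algebra" concrete, here is a small sketch (the names are mine, not part of the answer): for lists of integers, the unique homomorphism into the algebra whose carrier is `Int`, with the empty list interpreted as `0` and cons interpreted as `(+)`, is just a fold.

```haskell
-- The unique homomorphism from ordinary lists (the initial algebra) into
-- the algebra (Int, 0, (+)) is foldr (+) 0.
evalSum :: [Int] -> Int
evalSum = foldr (+) 0

-- e.g. evalSum [1, 2, 3] == 6
```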
It gets more complicated for polymorphic types ...
A simple reason why they are called algebraic: there are both sum (logical disjunction) and product (logical conjunction) types. A sum type is a discriminated union, e.g.:
data Bool = False | True
A product type is a type whose constructor takes multiple arguments:
data Pair a b = Pair a b
In O'Caml "product" is made more explicit:
type ('a, 'b) pair = Pair of 'a * 'b
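One way to see why "sum" and "product" are apt (a small illustrative sketch, not from the answer above): counting the inhabitants of these types follows ordinary arithmetic.

```haskell
-- Bool has 2 inhabitants, so a sum of two Bools has 2 + 2 = 4 inhabitants
-- and a product of two Bools has 2 * 2 = 4 inhabitants.
sums :: [Either Bool Bool]
sums = map Left [False, True] ++ map Right [False, True]        -- 4 values

products :: [(Bool, Bool)]
products = [(x, y) | x <- [False, True], y <- [False, True]]    -- 4 values
```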
Haskell's datatypes are called "algebraic" because of their connection to categorical initial algebras. But that way lies madness.
@olliej: ADTs are actually "sum" types. Tuples are products.