Is a formal language an algebraic structure?
Solution 1:
Your language is definitely an algebraic gadget. It's context-free, and any context-free language can be viewed as a kind of "algebra". The paper of Chomsky and Schützenberger (The Algebraic Theory of Context-Free Languages) is a nice (and readable), historically relevant view of the subject, and discusses a tight (and very interesting!) connection between formal languages and formal power series. This is analogous to the connection between a sequence of integers and its generating function, but there's actually something more overtly algebraic you can do. I'll summarize that here, but you can read more in any decent text on universal algebra.
We think of a grammar as telling us which words we can build from pre-existing words. For your example, we have a fairly simple grammar (sketched as code after this list):
- $x$ is a word for every variable/atomic proposition $x$
- If $\varphi$ is a pre-existing word, $\lnot \varphi$ is a word
- If $\varphi$ and $\psi$ are pre-existing words, then $\varphi \land \psi$, $\varphi \lor \psi$, $\varphi \to \psi$, etc. are words
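To make this concrete, here is a minimal sketch of the grammar as a Haskell datatype (all names here are my own, chosen for illustration): each grammar rule becomes a constructor, and the words are exactly the values of this type.

```haskell
-- A sketch of the term language as an algebraic datatype.
-- Names (Prop, Var, Not, ...) are illustrative, not canonical.
data Prop
  = Var String        -- an atomic proposition x
  | Not Prop          -- ¬φ
  | And Prop Prop     -- φ ∧ ψ
  | Or  Prop Prop     -- φ ∨ ψ
  | Imp Prop Prop     -- φ → ψ
  deriving (Eq, Show)

-- An example word: ¬x ∧ (y → z)
example :: Prop
example = And (Not (Var "x")) (Imp (Var "y") (Var "z"))
```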
Now, as you suspected, these form an algebra (indeed, a free algebra), often called the "algebra of terms" because the elements of this algebra are terms :P.
Formally, we look at the set of all words generated by this grammar, which you're calling $W$. Then we add an operation for each constructor: a unary operation $\lnot : W \to W$, a binary operation $\land : W^2 \to W$, and so on. (NB: for general context-free grammars, some of these operators might only be partially defined. We can get around this by working with multi-sorted algebras.)

While we could stop here, we almost certainly want to consider some of these words "equal". That is, we should probably think of $\varphi \land (\psi \land \theta)$ as the same as $(\varphi \land \psi) \land \theta$, even though they're technically different words. Similarly, we might want to identify the words $\varphi \land (\psi \lor \theta)$ and $(\varphi \land \psi) \lor (\varphi \land \theta)$, since in applications these are usually equivalent formulas. So we quotient our term algebra (that is, $W$) by the equivalence relation saying that these words are equivalent. It turns out that this quotient is also a computable thing to work with, if you're interested in such things, but I won't say more about that here.
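One way to see the quotient computationally, for just one of the identifications above (associativity of $\land$): rather than building the quotient set directly, pick a normal form and compare words by comparing their normal forms. A sketch, reusing the `Prop` type from above (again, all names are mine):

```haskell
-- Quotienting by associativity of ∧, via a normal form:
-- rewrite every ∧-chain to be right-nested, recursively.
normAnd :: Prop -> Prop
normAnd (And (And a b) c) = normAnd (And a (And b c))  -- reassociate
normAnd (And a b)         = And (normAnd a) (normAnd b)
normAnd (Or  a b)         = Or  (normAnd a) (normAnd b)
normAnd (Imp a b)         = Imp (normAnd a) (normAnd b)
normAnd (Not a)           = Not (normAnd a)
normAnd v@(Var _)         = v

-- Two words are "equal" in the quotient iff their normal forms coincide.
equivAssoc :: Prop -> Prop -> Bool
equivAssoc p q = normAnd p == normAnd q
```

So `equivAssoc (And p (And q r)) (And (And p q) r)` is `True`, even though the two words are syntactically distinct.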
Now, this particular algebraic structure (one with operators $\lnot$, $\land$, $\lor$, etc. satisfying some natural axioms) is very well studied. It's called a Heyting algebra, and what we've built here is the free Heyting algebra on our variables! I won't say more here, but you should know that this is an extremely useful construction in mathematical logic. For more general context-free languages (where we possibly identify words) we again get very useful things, oftentimes "free" structures (free groups can be constructed in this way), and this is also how we formally describe programming languages!
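If it helps, here is a sketch of the Heyting algebra signature as a Haskell type class (signature only; the axioms are laws the instances are expected to satisfy, and the names are my own):

```haskell
-- A Heyting algebra: a bounded lattice with an implication operator.
class HeytingAlgebra h where
  top, bot         :: h
  conj, disj, impl :: h -> h -> h
  neg              :: h -> h
  neg a = a `impl` bot   -- in a Heyting algebra, ¬a is defined as a → ⊥

-- The two-element Boolean algebra is, in particular, a Heyting algebra:
instance HeytingAlgebra Bool where
  top  = True
  bot  = False
  conj = (&&)
  disj = (||)
  impl a b = not a || b
```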
Lastly, one reason to care about free algebras/term algebras is that it's very easy to define functions out of them. We have this nice recursive definition, so if we have any other algebra of the same type, all we need to do is say what happens to the variables. You've actually partially rediscovered this yourself in your extension of $v$. Since our syntax mirrors our intended semantics, if we have any Heyting algebra $H$, any (set-theoretic) function $v : \mathsf{Variables} \to H$ extends uniquely to a homomorphism of Heyting algebras $\bar{v} : W \to H$. Indeed, we recursively do exactly what you expect (see the code sketch after this list):
- $\bar{v}(x) = v(x)$
- $\bar{v}(\lnot \varphi) = \lnot \bar{v}(\varphi)$
- $\bar{v}(\varphi \land \psi) = \bar{v}(\varphi) \land \bar{v}(\psi)$
- etc.
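In code, this universal property is just structural recursion. A sketch reusing the datatype and type class from the earlier snippets (`extend` is a hypothetical name of mine):

```haskell
-- Extend a valuation on variables to the whole term algebra.
-- This is the unique homomorphism v̄ agreeing with v on variables.
extend :: HeytingAlgebra h => (String -> h) -> Prop -> h
extend v (Var x)   = v x                             -- v̄(x) = v(x)
extend v (Not p)   = neg  (extend v p)               -- v̄(¬φ) = ¬v̄(φ)
extend v (And p q) = conj (extend v p) (extend v q)  -- v̄(φ∧ψ) = v̄(φ)∧v̄(ψ)
extend v (Or  p q) = disj (extend v p) (extend v q)
extend v (Imp p q) = impl (extend v p) (extend v q)

-- e.g., evaluating the earlier example word in Bool:
--   extend (\x -> x == "y") example
```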
I hope this helps ^_^