First-order logic - why do we need function symbols?

Using function symbols in first-order logic forces us to define "terms" inductively, which makes many proofs longer and much more tedious. Of course, function symbols simplify matters when trying to use first-order logic to describe things, but on the surface it seems to me they could be replaced completely by relations: instead of $f$, use a relation $R_f$ together with an axiom $\forall x\,\exists! y\, R_f(x,y)$ saying that $R_f$ is functional, and instead of writing $\varphi(f(x))$ write $\exists y\,(R_f(x,y)\wedge\varphi(y))$. Now use $f(x)$ as a shorthand notation, so you can use it in "real life" but avoid it in proofs.
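For instance (with $R_f$ and its functionality axiom as above), a nested term like $f(f(x))$ would have to be unnested by hand: $$\varphi(f(f(x)))\quad\text{becomes}\quad \exists y\,\exists z\,\big(R_f(x,y)\wedge R_f(y,z)\wedge\varphi(z)\big),$$ and the bookkeeping only grows with the depth of nesting.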

I guess I'm missing some deep necessity here, but what?

(The same goes for constant symbols, but they don't complicate things as much as function symbols do.)


You can always take a first-order theory with constant and function symbols and replace it with a theory using only relation symbols. This is at the expense of complicating the theory, since if, for example, $R$ is the relation symbol representing the binary function symbol $f$, we need an axiom of the form $$( \forall x ) ( \forall y ) ( \exists z ) ( \forall u ) ( R(x,y,u) \leftrightarrow u = z ).$$ But since $R$ and $f$ are definable from one another, there would be a natural translation of theorems of one theory into the other.
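To see the kind of overhead this creates, here is a sketch of how one familiar axiom might look after such a translation (the ternary symbol $M$, standing in for the group operation, is just illustrative notation): associativity, $(x \cdot y) \cdot z = x \cdot (y \cdot z)$, becomes $$( \forall x ) ( \forall y ) ( \forall z ) ( \forall u ) ( \forall v ) ( \forall w ) \big( ( M(x,y,u) \wedge M(u,z,v) \wedge M(y,z,w) ) \rightarrow M(x,w,v) \big),$$ on top of a functionality axiom for $M$ of the form above.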

Another expense is the statement of certain theorems of model theory. For example, a theory $T$ is said to admit elimination of quantifiers if for every formula $\phi (x_1 , \ldots , x_n )$ there is a quantifier-free formula $\psi (x_1 , \ldots , x_n )$ such that $$T \vdash (\forall x_1 ) \cdots ( \forall x_n ) ( \phi \leftrightarrow \psi ).$$ The basic example of a theory admitting elimination of quantifiers is the theory of real closed fields. Note that if the underlying language has no constant symbols, then there are no quantifier-free sentences, and therefore no theory without constant symbols can admit elimination of quantifiers. One would have to alter the definition so as to only concern those formulas $\phi$ with at least one free variable. One can show that this would be faithful, but it is also a somewhat less natural definition.
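As a concrete illustration (the quadratic is just a standard textbook example, not tied to any particular axiomatization), in the theory of real closed fields the formula $\exists x \, ( a x^2 + b x + c = 0 )$, with free variables $a, b, c$, is provably equivalent to the quantifier-free formula $$( a \neq 0 \wedge b^2 - 4ac \geq 0 ) \vee ( a = 0 \wedge b \neq 0 ) \vee ( a = 0 \wedge b = 0 \wedge c = 0 ),$$ which also shows why the formulas of interest here have at least one free variable.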


Formally, you're quite right: function symbols could be "defined out" by using relations. But the notion of terms makes a great many things vastly more natural.

One feature we would like our first-order logic to have is that a proof of a mathematical statement in some first-order theory (say, the theory of groups) structurally mirrors the proof in natural-language mathematics. In natural-language mathematics, though, we use functions (and operations) all the time, and it is frequently very unnatural to think of them as relations. This is still a little fluffy, though. A good example of how this surfaces in logic is when we work in an elementary fashion with model embeddings. Working with terms in that context allows you to very naturally exploit the perspective that the objects in a model arise -- are "created", if you will -- by closing under functions and operations. Without terms in the language, it becomes very difficult to employ that sort of reasoning in mediating between semantic and syntactic issues.
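A standard instance of this, under the usual definitions: in a language with function symbols, the substructure of a model $M$ generated by a nonempty subset $X$ (writing $\langle X \rangle$ for it, my notation here) has exactly the universe $$\{\, t^M(a_1, \ldots , a_n) : t(x_1, \ldots , x_n) \text{ a term}, \ a_1, \ldots , a_n \in X \,\},$$ so in a group, for example, the terms over $X$ enumerate precisely the subgroup generated by $X$. With only relation symbols there is no comparably direct, term-by-term description of how the elements of a submodel are "created".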