What is the purpose of free variables in first order logic?
Solution 1:
Consider the informal argument 'Everyone loves pizza. Anyone who loves pizza loves ice-cream. So everyone loves ice-cream.' Why is that valid?
Roughly: pick someone, whoever you like. Then s/he loves pizza, and so s/he loves ice-cream. But s/he was arbitrarily chosen. So everyone loves ice-cream.
This informal argument can be spelt out, with plodding laboriousness and explicit commentary, like this:
1. Everyone loves pizza. (That's given.)
2. Anyone who loves pizza loves ice-cream. (That's given too.)
3. Take some arbitrary person, call her Alice. Then Alice loves pizza. (From 1.)
4. If Alice loves pizza, she loves ice-cream. (From 2.)
5. Alice loves ice-cream. (From 3, 4, by modus ponens.)
6. But Alice was an arbitrary representative person, so what applies to her applies to anyone: so everyone loves ice-cream.
Now consider the formal analogue of this proof (using $F$ for 'loves pizza', $G$ for 'loves ice-cream'):
1. $\forall xFx$ (Premiss)
2. $\forall x(Fx \to Gx)$ (Premiss)
3. $Fa$ (From 1, Univ. Instantiation)
4. $(Fa \to Ga)$ (From 2, Univ. Instantiation)
5. $Ga$ (From 3, 4, by MP)
6. $\forall xGx$ (From 5, Univ. Generalization)
(UG on $a$ is legitimate because $a$ appears in no assumption on which (5) depends, and so it can be thought of as indicating an arbitrary representative member of the domain.)
So note that we need here some symbol playing the role of $a$: not a true constant with a fixed interpretation, not a bound variable, but (as the common jargon has it) a parameter, which stands in, in some sense, for an arbitrary element of the domain.
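To see this parameter discipline in action in a modern proof assistant, here is a minimal sketch of the argument in Lean 4 (the names `α`, `F`, `G`, `h1`, `h2` are mine, not anything standard): the tactic `intro a` introduces exactly such a parameter, and the final universal generalization is implicit in the fact that `a` was arbitrary.

```lean
-- A sketch of the pizza/ice-cream argument: α is the domain,
-- F is 'loves pizza', G is 'loves ice-cream'.
example (α : Type) (F G : α → Prop)
    (h1 : ∀ x, F x)              -- Premiss 1
    (h2 : ∀ x, F x → G x) :      -- Premiss 2
    ∀ x, G x := by
  intro a                        -- take an arbitrary parameter a
  have h3 : F a := h1 a          -- from 1, Univ. Instantiation
  have h4 : F a → G a := h2 a    -- from 2, Univ. Instantiation
  exact h4 h3                    -- from 3, 4 by MP; UG is implicit in intro
```

Nothing would stop us writing `intro x` instead of `intro a`, recycling the bound letter as the parameter; that is precisely the second notational tradition described next.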
Now, in some syntaxes, variables $x$, $y$, etc. only ever appear bound, as parts of quantified sentences, and parameters are typographically quite distinct: $a$, $b$, etc. This is the usage in Gentzen, for example.
But in another tradition (probably more common, though not for that reason to be preferred), we are typographically economical and recycle the same letters both as true variables and as parameters. In other words, we allow the same letters to appear both as "bound" variables and as "free" variables. Then the formal proof will look like this:
1. $\forall xFx$ (Premiss)
2. $\forall x(Fx \to Gx)$ (Premiss)
3. $Fx$ (From 1, Univ. Instantiation)
4. $(Fx \to Gx)$ (From 2, Univ. Instantiation)
5. $Gx$ (From 3, 4, by MP)
6. $\forall xGx$ (From 5, Univ. Generalization)
This is a superficial difference, however: the role of the free (unbound) variable is as a parameter. And as we've seen, even in informal arguments we use expressions like "s/he" or even "Alice" as parameters. Looked at in that light, there should be no mystery about why we need parameters in a formal logic too; and in one style of syntax, unbound variables play that role.
Solution 2:
Categorical logic suggests a very natural way to think of free variables: they're just another kind of element.
There is a notion of a "generalized element" of an object $X$ in a category, which is simply any morphism whose codomain is $X$. Ordinary elements of sets can be viewed as generalized elements, by identifying an element $a \in X$ with the function
$$ f : \{ \varnothing \} \to X : \varnothing \mapsto a $$
The identity map on $X$, interpreted as a generalized element, turns out to be an excellent way to capture the notion of an "indeterminate variable" or a "generic element".
When we have a collection of free variables $x_1, \ldots, x_n$ of our domain of discourse, and we make an interpretation into the set $U$, we interpret them as the generalized elements corresponding to the projection maps $U^n \to U$.
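Here is a minimal sketch of these ideas in Lean 4 (the definition names are mine): an ordinary element as a map out of a one-point type, the identity as the generic element, and two free variables over $U$ as the projections out of $U \times U$.

```lean
-- An ordinary element a : X, viewed as a map from a one-point type
-- (PUnit plays the role of {∅}) that picks out a.
def elemAsMap {X : Type} (a : X) : PUnit → X := fun _ => a

-- The identity map on X, viewed as a generalized element: the
-- 'generic element' or 'indeterminate variable' of X.
def genericElem (X : Type) : X → X := id

-- Two free variables x, y over U, interpreted as the projections.
def varX (U : Type) : U × U → U := Prod.fst
def varY (U : Type) : U × U → U := Prod.snd
```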
We can correspondingly interpret a binary relation $R$ as being a particular map
$$ R : U^2 \to \{ \text{true}, \text{false} \} $$
and $R(x,y)$ is the corresponding generalized element of $\{ \text{true}, \text{false} \}$.
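Continuing the sketch (with `Bool` standing in for $\{ \text{true}, \text{false} \}$), $R(x,y)$ is obtained by composing the interpretation of $R$ with the two projections, i.e. with the free variables:

```lean
-- R(x,y) as a generalized element of Bool: the interpretation of R
-- composed with the projections (varX and varY from the sketch above).
def interpRxy {U : Type} (R : U → U → Bool) : U × U → Bool :=
  fun p => R (Prod.fst p) (Prod.snd p)
```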
Solution 3:
If one does not consider formulas with free variables, then one cannot write a proof by induction on the formation of formulas. Every first-order formula is constructed from primitive formulas either by putting a quantifier in front of a formula with a free variable, thereby binding the variable, or by applying operations like "and", "or", "not", "if...then...", etc., to other formulas. In the inductive step of such a proof, you show that if the thing you're trying to prove is true of two formulas $A$ and $B$, then it's true of $[A\text{ or }B]$, true of $\exists x\ B$, and so on.
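To make "induction on the formation of formulas" concrete, here is a minimal sketch of first-order formulas as an inductive type in Lean 4 (the constructor names and string-based variables are mine). The quantifier clauses take a formula in which the quantified variable may occur free, so without free variables the `all` and `ex` cases could never arise; a proof by induction on formation is just structural induction over this type.

```lean
-- First-order formulas, built up from primitive (atomic) formulas.
inductive Formula where
  | atom : String → List String → Formula  -- primitive formula R(x₁, ..., xₙ)
  | not  : Formula → Formula
  | and  : Formula → Formula → Formula
  | or   : Formula → Formula → Formula
  | all  : String → Formula → Formula      -- ∀x A: binds x, which may be free in A
  | ex   : String → Formula → Formula      -- ∃x A: likewise

-- ∃x R(x,y): the quantifier is applied to a formula with x free.
example : Formula := .ex "x" (.atom "R" ["x", "y"])
```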
Łoś's theorem says that if a first-order statement is true in "almost every" factor of an ultraproduct, then it's true in the ultraproduct. Łoś's theorem is proved by induction on the formation of first-order formulas, and the induction necessarily passes through subformulas with free variables.