Why do first order languages have at most countably many symbols?

Every proof that I read seems to assume that $|L|\leq\aleph_0$. But then how do you model things like a field over $\mathbb{R}$ without running out of symbols?

More importantly, how can I prove that $|L|\leq\aleph_0$?


Solution 1:

You can have as many symbols as you like in your language! For instance, with a possibly uncountable language $L$, the (downward) Löwenheim–Skolem theorem becomes:

If $\mathcal{M}$ is an $L$-structure, then there is an elementary substructure $\mathcal{N}\preccurlyeq\mathcal{M}$ of cardinality at most $\aleph_0\cdot\vert L\vert$.
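To see what this gives you concretely (a small worked illustration; the constant symbols $c_r$ are just names chosen for the example): let $L$ consist of $+,\cdot,<,0,1$ together with a constant symbol $c_r$ for each real number $r$, so that $\vert L\vert = 2^{\aleph_0}$. The theorem then says that any $L$-structure, for instance an elementary extension of $\mathbb{R}$ in which each $c_r$ names $r$, has an elementary substructure of cardinality at most $\aleph_0\cdot 2^{\aleph_0} = 2^{\aleph_0}$. You should not expect a countable elementary substructure here: any elementary substructure must contain the interpretation of every $c_r$, and there are continuum many of them.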

Some authors choose to restrict attention to countable languages for simplicity; I think that's a terrible decision most of the time, since it leads to exactly the kind of confusion you're describing.

That said, there are situations where it does matter that the language be countable. For example:

  • Morley's theorem only applies to theories in countable languages. I can have a theory $T$ in an uncountable language $L$ which is $\aleph_2$-categorical but not $\aleph_1$-categorical (a concrete example is sketched after this list). And in fact, extending Morley's theorem to uncountable languages is very nontrivial.

  • Computable structure theory really only works when the language is countable, since everything has to be coded by natural numbers.
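To sketch a concrete example for the first bullet (a standard one; the constant symbols $c_\alpha$ are just names chosen for the sketch): let $L$ consist of constant symbols $c_\alpha$ for each $\alpha<\omega_1$, and let $T$ say that $c_\alpha\neq c_\beta$ whenever $\alpha\neq\beta$. Any two models of $T$ of cardinality $\aleph_2$ are isomorphic: match the constants to each other, then match the remaining $\aleph_2$ elements by any bijection. So $T$ is $\aleph_2$-categorical. But $T$ is not $\aleph_1$-categorical, since a model of cardinality $\aleph_1$ may have no elements beyond the constants, or countably many, or $\aleph_1$ many, and these models are pairwise non-isomorphic.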

But yes, except on rare occasions (at least, they seem rare to me), there is no need to restrict attention to countable languages; and texts which do restrict to countable languages just to simplify matters should say so explicitly to avoid confusion.

Solution 2:

This is not true, precisely for the reason you stated. For example, to apply the compactness theorem so as to prove the existence of a hyperreal extension of the real numbers, Abraham Robinson exploited a language with uncountably many symbols.
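
Here is a sketch of a compactness argument along those lines (a standard presentation, not necessarily Robinson's exact setup; the symbols $c_r$ and $\varepsilon$ are just names chosen for the sketch). Work in the language $L=\{+,\cdot,<,0,1\}\cup\{c_r : r\in\mathbb{R}\}$, which has $2^{\aleph_0}$ symbols, and let $T$ be the set of all $L$-sentences true in $\mathbb{R}$ when each $c_r$ is interpreted as $r$. Add a new constant $\varepsilon$ together with the axioms $0<\varepsilon$ and $\varepsilon<c_r$ for every real $r>0$. Any finite subset of this theory mentions only finitely many of the new axioms, so it is satisfied in $\mathbb{R}$ itself by interpreting $\varepsilon$ as a sufficiently small positive real. By compactness the whole theory has a model: an ordered field into which $\mathbb{R}$ embeds elementarily via $r\mapsto c_r$ and which contains a positive infinitesimal. The uncountably many constants $c_r$ are exactly what lets you carry every first-order fact about the reals over to the extension.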