Are the natural numbers implicit in the construction of first-order logic? If so, why is this acceptable?

I think there are two (very interesting) questions here. Let me try to address them.

First, the title question: do we presuppose natural numbers in first-order logic?

I would say the answer is definitely yes. We have to assume something to get off the ground; on some level, I at least take the natural numbers for granted.

(Note that there's a lot of wiggle room in exactly what this means: I know people who genuinely find it inconceivable that PA could be inconsistent, and I know people who find it very plausible, if not likely, that PA is inconsistent - all very smart people. But I think we do have to presuppose $\mathbb{N}$ to at least the extent that, say, Presburger arithmetic (https://en.wikipedia.org/wiki/Presburger_arithmetic) is consistent.)

Note that this isn't circular, as long as we're honest about the fact that we really are taking some things for granted. This shouldn't be too weird - if you really take nothing for granted, you can't get very much done (see https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles). In terms of foundations, note that we will still find it valuable to define the natural numbers inside our foundations; but this will be an "internal" expression of something we take for granted "externally." So, for instance, at times we'll want to distinguish between "the natural numbers" (as defined in ZFC) and "the natural numbers" (that we assume at the outset we "have" in some way).

Second question: Is it okay to view first-order logic as a kind of "proxy" for clear, precise natural language?

My answer is a resounding: Sort of! :P

On the one hand, I'm inherently worried about natural language. I don't trust my own judgment about what is "clear" and "precise." For instance, is "This statement is false" clear and precise? What about the Continuum Hypothesis?

For me, one of the things first-order logic does is pin down a class of expressions which I'm guaranteed are clear and precise. Maybe there are more of them (although I would argue there aren't any, in a certain sense; see Lindström's theorem, https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s_theorem), but at the very least anything I can express in first-order logic is clear and precise. There are a number of properties FOL has which make me comfortable saying this; I can go into more detail if that would be helpful.

So for me, FOL really is a proxy for clear and precise mathematical thought. There's a huge caveat here, though, which is that context matters. Consider the statement "$G$ is torsion" (here $G$ is a group). In the language of set theory with a parameter for $G$, this is first-order; but in the language of groups, there is no first-order sentence $\varphi$ such that [$G\models\varphi$ iff $G$ is torsion] for all groups $G$! This is a consequence of the Compactness Theorem for FOL.
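(For the curious, here is the usual compactness argument, sketched: suppose such a $\varphi$ existed. Add a new constant symbol $c$ to the language of groups, and consider the theory consisting of the group axioms, $\varphi$, and the sentences $c\neq e$, $c\cdot c\neq e$, $c\cdot c\cdot c\neq e$, and so on. Every finite subset of this theory is satisfied by a large enough finite cyclic group with $c$ interpreted as a generator, so by compactness the whole theory has a model: a group satisfying $\varphi$ which contains an element of infinite order. That contradicts the assumption that $\varphi$ expresses torsion.)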

So you have to be careful when asserting that something is first-order if you're working in a domain that's "too small" (in some sense, set theory is "large enough," and an individual group isn't). But so long as you are careful about whether what you are saying is really expressible in FOL, I think this is what everyone does, to some degree and in some way.

At least, it's what I do.


There are three interrelated concepts:

  • The natural numbers
  • Finite strings of symbols
  • Formulas - particular strings of symbols used in formal logic.

If we understand any one of these three, we can use that to understand all three.

For example, if we know what strings of symbols are, we can model the natural numbers in unary notation, with some fixed symbol serving as a tally mark.
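To make that concrete, here is a minimal sketch in Python (the names are mine, purely for illustration): natural numbers are modeled as strings of a single tally symbol, and all "arithmetic" is string manipulation.

```python
# A minimal sketch: natural numbers modeled as unary strings of one
# tally symbol. All "arithmetic" below is just string manipulation.
# (Function names are illustrative, not from any standard treatment.)

TALLY = "|"

def zero() -> str:
    return ""                 # the empty string plays the role of 0

def successor(n: str) -> str:
    return n + TALLY          # append one tally mark

def add(m: str, n: str) -> str:
    return m + n              # concatenation acts as addition

def equals(m: str, n: str) -> bool:
    return m == n             # same number iff same string

# 2 + 3 = 5, computed entirely on strings:
two = successor(successor(zero()))
three = successor(two)
assert add(two, three) == TALLY * 5
```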

Similarly, the method of proof by induction (used to prove universal statements about natural numbers) has a mirror image in the principle of structural induction (used to prove universal statements about finite strings of symbols, or about formulas).
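The programming analogue of this mirror image is structural recursion. As a toy illustration (Python 3.10+, with a formula datatype of my own devising, not from any standard library), a function defined by one case per constructor has exactly the shape of a proof by structural induction, just as a function defined on zero and successor has the shape of an ordinary induction:

```python
# Toy illustration: structural recursion on formulas, one case per
# constructor -- the same shape as a proof by structural induction.
# (This datatype is my own sketch; requires Python 3.10+.)

from dataclasses import dataclass

@dataclass
class Var:                 # atomic formula: a propositional variable
    name: str

@dataclass
class Not:                 # negation of a subformula
    sub: "Formula"

@dataclass
class And:                 # conjunction of two subformulas
    left: "Formula"
    right: "Formula"

Formula = Var | Not | And

def depth(f: Formula) -> int:
    # One case per constructor: the shape of structural induction.
    match f:
        case Var(_):
            return 0                                   # base case
        case Not(sub):
            return 1 + depth(sub)                      # one inductive hypothesis
        case And(left, right):
            return 1 + max(depth(left), depth(right))  # two inductive hypotheses

assert depth(And(Var("p"), Not(Var("q")))) == 2
```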

Conversely, if we know what natural numbers are, we can model strings of symbols by coding them as natural numbers (e.g. with prime power coding).
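Here is what that can look like concretely, as a minimal Python sketch (using one common convention - the $i$-th prime raised to a positive code for the $i$-th symbol - with names of my own choosing):

```python
# A sketch of prime power coding, one standard convention among several:
# the string s1 s2 ... sk is coded as p1^c1 * p2^c2 * ... * pk^ck, where
# pi is the i-th prime and ci > 0 is the number assigned to symbol si.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"   # illustrative choice of symbols

def primes():
    """Generate 2, 3, 5, ... by trial division (fine for short strings)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(s: str) -> int:
    code = 1
    for p, ch in zip(primes(), s):
        code *= p ** (ALPHABET.index(ch) + 1)   # +1 keeps exponents nonzero
    return code

def decode(code: int) -> str:
    chars = []
    for p in primes():
        if code == 1:
            break
        exp = 0
        while code % p == 0:
            code //= p
            exp += 1
        chars.append(ALPHABET[exp - 1])
    return "".join(chars)

assert decode(encode("logic")) == "logic"
```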

This gives a very specific argument for the way in which formal logic presupposes a concept of natural numbers. Even if we try to treat formal logic entirely syntactically, as soon as we know how to handle finite strings of symbols, we will be able to reconstruct the natural numbers from them.

The underlying idea behind all three of the related concepts is a notion that, for lack of a better word, could be described as "discrete finiteness". It is not based on the notion of "set", though - if we understand what individual natural numbers are, this allows us to understand individual strings, and vice versa, even with no reference to set theory. But, if we do understand sets of natural numbers, then we also understand sets of strings, and vice versa.

If you read more classical treatments of formal logic, you will see that they did consider the issue of whether something like set theory would be needed to develop formal logic - it is not. These texts often proceed in a more "finitistic" way, using an inductive definition of formulas and a principle of structural induction that allows us to prove theorems about formulas without any reference to set theory. This method is still well known to contemporary mathematical logicians, but contemporary texts are often written in a way that obscures it. The set-theoretic approach is not the only approach, however.


For the title question, I would say the answer is no (sort of). To define the language of first-order logic, you only need to talk about finite lists of signs drawn from a finite alphabet. For example, the alphabet might comprise the 36 signs $a,b,c,\ldots,x,y,z,0,1,\ldots,9$ together with a sign that I'll call a "space". You certainly don't need to define the notion "countably infinite" to ensure a countably infinite supply of variable symbols: you can arrange that when you define how sequences of signs encode language constructs, e.g. by saying that a variable comprises a letter, followed by a non-zero digit, followed by a (possibly empty) sequence of digits, terminated by a space.
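As a minimal sketch of that convention (my own rendering of it in Python, with an illustrative regular expression):

```python
import re

# A sketch of the convention above: a variable is a letter, then a
# non-zero digit, then any further digits, terminated by a space.
# This yields infinitely many variables over a finite alphabet
# without ever mentioning "countably infinite".

VARIABLE = re.compile(r"[a-z][1-9][0-9]* ")

def is_variable(token: str) -> bool:
    return VARIABLE.fullmatch(token) is not None

assert is_variable("x1 ")
assert is_variable("y203 ")
assert not is_variable("x0 ")   # the digit after the letter must be non-zero
assert not is_variable("x1")    # missing the terminating space
```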

But having said that, to reason about syntax, you will need to reason by induction, and the dividing line between reasoning about sequences of signs and reasoning about natural numbers becomes a very fine one.

(See https://en.wikipedia.org/wiki/Formal_system#Logical_system for some more information on this kind of thing. For good reasons, logic texts often gloss over the details, as all that really matters is that formulas in the language can be represented as finitary objects.)