Subset of a finite set is finite

We define $A$ to be a finite set if there is a bijection between $A$ and a set of the form $\{0,\ldots,n-1\}$ for some $n\in\mathbb N$.

How can we prove that a subset of a finite set is finite? It is of course sufficient to show this for a subset of $\{0,\ldots,n-1\}$. But how do I do that?


The proof is essentially the pigeonhole principle, and it is proved by induction.

Let us denote $[n]=\{0,\ldots,n-1\}$. What we want to prove is in two parts:

  1. If $A\subseteq[n]$ then either $A=[n]$ or there is a bijection between $A$ and $[m]$ for some $m<n$.

  2. If $m<n$ then there is no bijection between $[n]$ and $[m]$. Equivalently, we want to prove that if $f\colon[n]\to[n]$ is injective then it is surjective. We may also want to prove that if $f\colon[n]\to[m]$ then $f$ is not injective, and that if $f\colon[m]\to[n]$ then $f$ is not surjective.

The first part is not very difficult: we define by recursion $f(0)=\min A$ and $f(k+1)=\min\left(A\setminus\{f(0),\ldots,f(k)\}\right)$. Either $f$ is a bijection between $[n]$ and $A$, or the recursion must stop earlier and $f$ is a bijection between $[m]$ and $A$ for some $m<n$. Note that this is not begging the question, because $A$ is a set of natural numbers which is a subset of $[n]$, so it cannot contain more elements than $[n]$ (more in the actual sense of containment, not in the sense of cardinality).
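If it helps to see the recursion concretely, here is a minimal Python sketch (the name `enumerate_increasing` is mine, not from the proof); it simply lists $A$ in increasing order:

```python
def enumerate_increasing(A):
    """Implements f(0) = min A and f(k+1) = min of A with
    f(0), ..., f(k) removed: it lists A in increasing order,
    stopping when A is exhausted."""
    remaining = set(A)
    f = []
    while remaining:
        f.append(min(remaining))   # the least element not yet used
        remaining.remove(f[-1])
    return f                       # a bijection between [len(f)] and A

# A = {1, 4, 5} as a subset of [6]: the recursion stops after three
# steps, giving a bijection between [3] and A with 3 < 6.
print(enumerate_increasing({1, 4, 5}))  # [1, 4, 5]
```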

The second part is tricky because it seems so obvious. This is where the pigeonhole principle is actually proved.

We prove this part by induction. For $n=0$ this is obvious because $[0]=\varnothing$.

Assume that for $[n]$ it is true that if $f\colon[n]\to[n]$ is injective then it is surjective. We want to show this is also true for $[n+1]$.

Let $f\colon[n+1]\to[n+1]$ be an injective function. There are two cases:

  1. If $f(k)\neq n$ for all $k<n$, then the restriction of $f$ to $[n]$ is an injective function from $[n]$ into $[n]$. By the induction hypothesis this restriction is surjective; therefore $f(n)\notin[n]$ (otherwise $f$ would not be injective), so $f(n)=n$ and $f$ is surjective as well.

  2. If $f(k)=n$ for some $k<n$, by injectivity this $k$ is unique. We define $g$ as follows: $$g(x)=\begin{cases} f(n) & x=k\\ n & x=n\\ f(x) & x\neq k,n\end{cases}$$ It follows that $g$ is also injective, and for all $k<n$ we have $g(k)\neq n$ (because $g(n)=n$ and $g$ is injective). Therefore by the previous case we know that $g$ is surjective. Since $g=f\circ\sigma$, where $\sigma$ is the transposition of $[n+1]$ swapping $k$ and $n$, the functions $f$ and $g$ have the same range, so $f$ is also surjective, as wanted.
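To see the swap of case 2 in action, here is a small Python sketch (the name `swap_to_fix_n` is mine, and functions are represented as lists of values, an assumption of the sketch):

```python
def swap_to_fix_n(f):
    """Given an injective f : [n+1] -> [n+1] (as a list) with f(k) = n
    for some k < n, return the g of case 2: g(k) = f(n), g(n) = n,
    and g agrees with f elsewhere."""
    n = len(f) - 1
    k = f.index(n)          # the unique k with f(k) = n
    g = list(f)
    g[k], g[n] = f[n], n    # swap so that g fixes n
    return g

# n + 1 = 4: f = [3, 0, 2, 1] is injective with f(0) = 3.
# The swap gives g = [1, 0, 2, 3], still injective and fixing 3,
# so case 1 applies to g.
print(swap_to_fix_n([3, 0, 2, 1]))  # [1, 0, 2, 3]
```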


A word on the axiom of choice and finiteness. The above proof shows that finite sets are Dedekind-finite. There are other ways of defining finiteness, all of which are true for finite sets, but may also be true for infinite sets. For example "every surjection is a bijection" might fail for infinite Dedekind-finite sets; or "every linearly ordered partition is finite"; etc.

Assuming the axiom of choice, or actually just the axiom of countable choice, is enough to conclude, however, that all Dedekind-finite sets are finite. Therefore the above forms of finiteness are equivalent once more. (The reverse implication is false, by the way: it is consistent that the axiom of countable choice fails but every Dedekind-finite set is finite.)


Here is a slightly different proof of the pigeonhole principle:

Suppose $f:[n]\to[n]$ is an injection that is not surjective, for some $n>0$. We can assume without loss of generality that $n-1$ is not in the range of $f$: if it is, we map the element that is mapped to $n-1$ to some element not in the range instead, and get a new injection that is not surjective, with $n-1$ not in the range. We now show that $f|[n-1]$ is an injection from $[n-1]$ into $[n-1]$ that is not surjective. Trivially, the restriction of an injection is an injection, and it maps into $[n-1]$ since $n-1$ is not in the range of $f$. But $f(n-1)\in[n-1]$ is not in the range of $f|[n-1]$. Descending in this way we would eventually reach a non-surjective injection from $[0]$ to $[0]$, which is absurd.


Let $I_n = \{0, 1, \ldots, n-1\}$, and let $B\subseteq A$. Then there exists an injection $B\to I_n$, since there is a bijection $A\to I_n$. Since there is such an $n$, there exists(*) a minimal $j$ such that there is an injection $B\to I_j$. If that injection were not a surjection, we could construct an injection $B\to I_{j-1}$ by compressing the range past a missed value, contradicting the minimality of $j$ (unless $j=0$, but then $I_j=\emptyset$ and the function is trivially a surjection). Hence $B$ is in bijection with $I_j$, so $B$ is finite.
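The compression step can be made explicit; here is a minimal Python sketch, under the assumption that the injection is given as a dictionary (the name `drop_one` is mine):

```python
def drop_one(g, j):
    """Given an injection g : B -> I_j (as a dict) that misses some
    value v < j, shift every value above v down by one to get an
    injection B -> I_{j-1}."""
    v = min(set(range(j)) - set(g.values()))   # a value not in the range
    return {b: (x if x < v else x - 1) for b, x in g.items()}

# g misses 1, so the value 2 shifts down, giving an injection into I_2:
print(drop_one({'a': 0, 'b': 2}, 3))  # {'a': 0, 'b': 1}
```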


If you instead use Dedekind-finite as the definition of finite, that is, that every injective function on $A$ is surjective, then it becomes rather obvious.

If $B\subseteq A$ were not finite, there would be an injective mapping from $B$ onto some $C\subsetneq B$, and we could extend this to $A$ by taking the identity map outside $B$. Then we would have an injective mapping from $A$ to a proper subset of $A$, which means that $A$ is infinite, contradicting the assumption.
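A sketch of that extension in Python, in the spirit of the argument (the name `extend_by_identity` is mine); as an illustration, take $B$ to be the even naturals inside $A=\mathbb N$, with $h(b)=b+2$ an injection from $B$ onto a proper subset of $B$:

```python
def extend_by_identity(h, in_B):
    """Extend an injection h defined on B to the whole set:
    act as h on B and as the identity outside B."""
    return lambda a: h(a) if in_B(a) else a

# h(b) = b + 2 maps the evens injectively onto the evens >= 2; the
# extension is injective on all of N and its range misses 0,
# exhibiting N as Dedekind-infinite.
F = extend_by_identity(lambda b: b + 2, lambda a: a % 2 == 0)
print([F(a) for a in range(6)])  # [2, 1, 4, 3, 6, 5]
```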


(*) This relies on the fact that every non-empty set $X$ of natural numbers (here, the set of $j$ such that there is an injection $B\to I_j$) has a minimal element, which can be proven by induction.


For any $n\in\mathbb N$ let $[n]$ represent the set $\{0,\ldots,n-1\}$.

Proposition: For all natural numbers $n$, all subsets of $[n]$ are finite.

We prove it by induction on $n$. For $n=0$, the only subset of $[0]$ is $\emptyset$, which is finite.
Now assume that all subsets of $[n]$ are finite, and let $A$ be a subset of $[n+1]$.
If $n \notin A$ then $A \subseteq [n]$ and $A$ is finite by the induction hypothesis.
If $n \in A$ then $A-\{n\}\subseteq [n]$ and is finite by the induction hypothesis, which means there is a bijection $f$ from $[k]$ to $A-\{n\}$ for some natural number $k$. Define a bijection $g \colon [k+1] \to A$ as follows:

$$ g(x)= \begin{cases} f(x), & \text{if $x<k$} \\ n, & \text{ if $x=k$} \end{cases} $$

Since there is a bijection from $[k+1]$ to $A$, $A$ is finite. Thus all subsets of $[n+1]$ are finite.
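This induction translates directly into a recursion; here is a short Python sketch (the name `counting_bijection` is mine), returning the bijection as a list whose $x$-th entry is $g(x)$:

```python
def counting_bijection(A, n):
    """For A a subset of [n], build a bijection [k] -> A following the
    induction: recurse on A minus {n-1} inside [n-1], then, if n-1 is
    in A, send the new top index k to n-1."""
    if n == 0:
        return []          # the only subset of [0] is the empty set
    g = counting_bijection(A - {n - 1}, n - 1)
    return g + [n - 1] if (n - 1) in A else g

# A = {1, 3, 4} as a subset of [5]: the recursion yields [1, 3, 4],
# a bijection between [3] and A.
print(counting_bijection({1, 3, 4}, 5))  # [1, 3, 4]
```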