Are the (finitely) decimal-expressible real numbers in [0, 1] countable?

Solution 1:

Your intuition is correct: the set of "decimal expressible" real numbers in [0, 1] whose decimal expansion has only finitely many nonzero digits is countable, and the "injective function" you gave works.
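Since the question does not spell out the injection, here is a minimal sketch of one concrete choice in Python (the name `encode` is mine, not from the question): read the digit string after the decimal point as a natural number, with a 1 prepended so that leading zeros are not lost.

```python
def encode(digits: str) -> int:
    """Injectively map the finite decimal 0.d1d2...dn, given as its
    digit string, to a natural number.  Prepending '1' preserves
    leading zeros, so distinct expansions get distinct codes."""
    return int("1" + digits)

# Distinct finite decimals receive distinct natural numbers:
print(encode("05"))  # 105  (0.05)
print(encode("5"))   # 15   (0.5)
```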

However, if you allow decimal expansions with infinitely many digits (e.g. 0.333...), this no longer works. First of all, which natural number would you associate with 0.333...? In fact, once infinite expansions are allowed, the set you described becomes uncountable; for the justification, see Cantor's Diagonal Argument, sketched below.
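To make the diagonal argument concrete, here is a small Python sketch (an illustration, not Cantor's original presentation): given any purported enumeration of infinite decimal expansions, it produces an expansion that differs from the $k$-th one in the $k$-th digit, hence appears nowhere in the list.

```python
def diagonal(enumeration):
    """Given enumeration(k)(j) = the j-th digit of the k-th listed
    number, return the digits of a number whose k-th digit differs
    from the k-th digit of the k-th number.  Using only 5 and 6
    avoids the 0.4999... = 0.5 ambiguity of dual expansions."""
    def digit(k):
        return 5 if enumeration(k)(k) != 5 else 6
    return digit

# Example: "enumerate" the constant expansions 0.kkk... (mod 10);
# the diagonal number differs from the k-th entry at position k.
constant = lambda k: (lambda j: k % 10)
d = diagonal(constant)
print([d(k) for k in range(10)])  # [5, 5, 5, 5, 5, 6, 5, 5, 5, 5]
```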

To answer your three questions:

  1. Yes.

  2. Yes. All you needed to do was associate every element of your set with a unique natural number (and vice versa), which you've done. If I give you a decimal expansion with finitely many nonzero digits, you can give me the unique natural number you've associated with it, and if I give you a natural number, you can give me the unique (finite) decimal expansion it corresponds to. This is exactly what it means for a set to be countable (see the two-way sketch after this list).

  3. No; again, see Cantor's Diagonal Argument.
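To illustrate item 2 in both directions, here is a hedged sketch reusing the hypothetical `encode` from above; `decode` inverts it on the image of `encode`.

```python
def encode(digits: str) -> int:
    """As above: 0.d1...dn -> the natural number 1d1...dn."""
    return int("1" + digits)

def decode(code: int) -> str:
    """Inverse on the image of encode: strip the leading 1 to
    recover the digit string, so decode(encode(s)) == s."""
    return str(code)[1:]

print(decode(encode("05")))  # '05', i.e. 0.05 comes back unchanged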

Solution 2:

Yes, that works for terminating decimals (which do not include all rationals). More generally, it's simpler to note that the rationals are countable by mapping the reduced rational $\rm\ m/n\ \to\ 2^m\ 3^n\:.$ The same idea easily extends from pairs of naturals to all eventually zero sequences of naturals. Such arguments cannot be extended to all the reals, since the well-known diagonalization argument due to du Bois-Reymond (and, later, Cantor) easily proves that they are uncountable.
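As a sanity check on this encoding, here is a short Python sketch; `_first_primes` is a hypothetical helper of mine, and `Fraction` keeps the rational reduced so the map $\rm m/n \to 2^m 3^n$ is well-defined. Injectivity in both functions comes from uniqueness of prime factorization.

```python
from fractions import Fraction

def encode_rational(q: Fraction) -> int:
    """Injectively map the reduced nonnegative rational m/n into N
    via m/n -> 2**m * 3**n."""
    return 2**q.numerator * 3**q.denominator

def encode_sequence(seq):
    """Extend the same idea to eventually-zero sequences of naturals:
    (a0, a1, ..., ak, 0, 0, ...) -> p0**a0 * p1**a1 * ... * pk**ak,
    where pi is the i-th prime."""
    code = 1
    for p, a in zip(_first_primes(len(seq)), seq):
        code *= p**a
    return code

def _first_primes(k):
    """Naive list of the first k primes (illustrative helper)."""
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

print(encode_rational(Fraction(1, 2)))  # 2**1 * 3**2 = 18
print(encode_sequence([1, 0, 2]))       # 2**1 * 3**0 * 5**2 = 50
```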

Solution 3:

When I was taking my first course in set theory, I asked my professor (now my advisor) much the same question.

First of all, the function is not well-defined, since there are numbers with several different decimal expansions (for example $\frac{1}{2} = 0.49999\ldots = 0.5$).

Let us assume that we take only the shortest finite expansion. This is possible, but only for a very small portion of the numbers: for example, $\frac{1}{3}$ has no finite decimal expansion, despite being a very simple rational number. So in fact you do not even use all the rational numbers in the interval.

Now suppose that a number $x$ has a finite representation at all. That means that for some $n\in\mathbb{N}$, $x$ has $n$ digits after the decimal point, i.e. $x\cdot 10^n\in\mathbb{N}$; so your function is defined only on a small portion of the rational numbers in the interval $[0,1]$.
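The criterion $x\cdot 10^n\in\mathbb{N}$ can be tested mechanically: a reduced fraction has a finite decimal expansion exactly when its denominator has no prime factors other than 2 and 5. A minimal sketch (the function name is mine):

```python
from fractions import Fraction

def has_finite_decimal(q: Fraction) -> bool:
    """True iff q * 10**n is an integer for some n, i.e. iff the
    reduced denominator's only prime factors are 2 and 5."""
    d = q.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(has_finite_decimal(Fraction(1, 2)))  # True  (0.5)
print(has_finite_decimal(Fraction(1, 3)))  # False (0.333...)
```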

As pointed out by Derek Jennings, some (in fact almost all) numbers do not have a finite decimal representation (and indeed most numbers have no finite representation in any base you choose), so even if you do define this function it will only cover a countable subset of the interval, whereas the interval itself is uncountable, with cardinality much greater than $|\mathbb{N}|$.

So to answer your question: yes, it is countable, and your function (after correcting the minor problem of non-unique expansions) is a proof of that.