What is the meaning of set-theoretic notation {}=0 and {{}}=1?
Solution 1:
There are two points here:
- There are two ways to extend the idea that $0=\varnothing$ and $1=\{\varnothing\}$: either by taking $n+1=\{n\}$, or by taking $n+1=n\cup\{n\}$ (where $n+1$ really just means "the successor of $n$"; addition hasn't been defined yet).
The former was used originally by Zermelo, the latter by von Neumann, and the latter is the modern standard for representing the natural numbers with sets. Note that in this representation the Cartesian product almost defines multiplication: while $2\times 4$ is not literally the set $8$, we can tell that it is $8$ by saying that $m\cdot n=k$ if and only if there is a bijection between the sets $m\times n$ and $k$.
- In either case, multiplication does not have to be given by the Cartesian product; there are other definitions we can use. Moreover, a notion of product does not have to coincide with any other "naturally occurring" notion of product.
More specifically, given a set which represents the natural numbers, the product is simply a function taking two natural numbers (or objects representing them) as input and returning one as output. This function can be pretty much anything, as long as it satisfies certain basic properties. As noted above, in the von Neumann interpretation of the natural numbers as sets we have a fairly easy way of defining multiplication (see the sketch after this list).
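To make the two successor conventions and the bijection idea concrete, here is a minimal Python sketch (my own illustration, not part of the original answer); the helper names `zermelo` and `von_neumann` are made up, and finite sets are modelled as `frozenset`s:

```python
# Minimal sketch (illustration only) of the two successor conventions and
# of reading multiplication off the Cartesian product.
from itertools import product

def zermelo(n):
    """Zermelo naturals: 0 = {} and successor(k) = {k}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset({s})
    return s

def von_neumann(n):
    """von Neumann naturals: 0 = {} and successor(k) = k ∪ {k}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset({s})
    return s

# In the von Neumann representation, m · n = k exactly when there is a
# bijection between m × n and k; for finite sets that just means the
# Cartesian product m × n has as many elements as k.
m, n, k = von_neumann(2), von_neumann(4), von_neumann(8)
assert len(set(product(m, n))) == len(k)   # |2 × 4| = |8| = 8

# The Zermelo naturals are different sets: each nonzero one is a singleton.
assert len(zermelo(4)) == 1 and len(von_neumann(4)) == 4
```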
Finally, let me add that we don't "think that a number is a set". We can use set theory to interpret many, if not all, mathematical theories, and this has foundational benefits: as long as you believe that set theory is consistent, everything that you can build within it is consistent as well, in particular the natural numbers, the real numbers, and so on. Set theory is a natural choice, since we already use sets so much anyway. So in set theory we really reduce the number of types of objects that we worry about, from a foundational point of view. (This is like how, no matter what type of variable you use in C++, the computer ends up interpreting it as a binary string of electric current, and not as physically different objects.)
Solution 2:
The definitions in question seem to be for the von Neumann ordinals. By definition, each ordinal is the set of its predecessors, so (for example):
$$0=\{\}\\1=\{0\}=\{\{\}\}\\2=\{0,1\}=\{\{\},\{\{\}\}\},$$ and so on. (Note that the definition of $2$ is not the same as the one you inferred.)
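As a quick sanity check on "each ordinal is the set of its predecessors", here is a small Python sketch (my own addition; the helper name `von_neumann` is made up), with `frozenset`s standing in for the finite ordinals:

```python
# Sketch: verify that each finite von Neumann ordinal is literally
# the set of its predecessors, i.e. n = {0, 1, ..., n-1}.

def von_neumann(n):
    """Build the finite ordinal n: 0 = {} and successor(k) = k ∪ {k}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset({s})
    return s

for n in range(5):
    assert von_neumann(n) == frozenset(von_neumann(k) for k in range(n))

print(von_neumann(2))  # frozenset({frozenset(), frozenset({frozenset()})}), i.e. {{}, {{}}}
```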
Wikipedia's article on ordinal arithmetic is pretty straightforward and understandable. Take a look and see what you think. If you have specific questions, feel free to let me know.
Solution 3:
I can't fully agree with @BadZen. Yes, in practice most "working" mathematicians rarely worry about fundamental issues like these and the "natural" numbers seem "obvious" to us. (Insert famous Kronecker quote here.) But the history of analysis has shown that a clear definition of the real numbers was needed before rigorous proofs for various basic results could be given. (Was this rigor really necessary? Well, even a genius like Euler made some embarrassing mistakes which nowadays every undergraduate could spot. Not because he was dumb, but because some of the things he dealt with simply lacked a clear definition.)
So, historically, it wasn't that the "sticky statements" came out of the blue and then, just for fun, a lot of clever guys stopped working on "serious" math in order to deal with foundations for half a century; it was the other way around: it turned out that the "naive" way of dealing with sets, which was for example used to introduce notions like Dedekind cuts, led to inconsistencies and threatened to kill the whole project. That's why people came up with ZF and so on. (And then came Gödel. But that's another story.)
Not only did Kronecker more than a hundred years ago claim that $\pi$ doesn't exist; even today there are serious mathematicians who don't "believe" in irrational numbers and/or infinite sets, so these issues are far from settled.
And, while I'm at it, is it really so "obvious" what a natural number is? You "know" what symbols like "4", "IV", "four", or maybe "vier" denote, but have you ever seen such a thing? You have an abstract concept of "fourness" (4 apples, 4 cars, 4 planets), but does that imply that "4" exists and what would existence actually mean in this case? Is the existence of "4" more obvious than the existence of "$\pi$"?
There's, BTW, a very nice new book about questions like these (and their history) by John Stillwell called The Real Numbers - highly recommended (like all of Stillwell's books).
Solution 4:
So, historically, because of various sticky statements that we can write down that make no sense (Russell's paradox, Cantor's paradox, etc.), we have felt a need to formally justify our use of all symbols in mathematical statements, to give them well-defined, formal meanings, as if they were elements of computer programs.
Numbers and sets provide unique challenges if we try to do this, because they are *integral to the very basic concept of proof itself*.
The way that you resolve these challenges is a formal set theory, wherein you get statements like {}=0. Think of it as '0' being defined as {} for our purposes: we are starting with a world where the symbol '0' has no meaning, and proceeding onwards to do... the entire rest of mathematics, with nothing but a formal set of rules and some axioms (statements assumed 'true').
The construction of the natural numbers you give is by no means the only one possible! So the statement "0 = {}" is not a universal, given, derivable, etc... thing. It is just one way of reconciling the concepts of numbers and sets with regard to a formal system of logic where we can avoid some of the "nonsense statements" that have caused logicians and mathematicians grief in the past. There are others! Studying the extent to which these models are equivalent / apply to each other in the presence of various sets of axioms is a discipline of logic with a rich body of literature.
You really needn't worry about it on some "fundamental" or intuitionistic level, unless you are interested in that sort of logic.
[edit - I don't have enough "reputation" to comment on Frunobulax's post below (bombs won't stop him; we're going to have to use NUCLEAR FORCE! :) ), but briefly, yes, I mostly agree with what he is saying - the development of the formalism was/is not a "tangential" activity; there is no good reason to believe that people are talking about the same, consistent, mathematics without it, instead of some kind of pre-Babel situation!
The real numbers are somewhat different from the natural numbers, however, as they are not so intimately related to the idea of proof (i.e. the metalogic for all of mathematics, including the domain of the reals, can be expressed without reals for almost every non-exotic/non-contrived model, etc.). This touches on the "non-belief" in real numbers... although I don't think anyone doesn't believe in *countably infinite* sets, as again there are certain proofs that require rather... complex... proof systems without them.
This is admittedly a sort of a nitpick, however...]
Solution 5:
You learned when you were young that 0 is zero. But that's silly: 0 is a circle drawn on a computer screen, not a number!!
What you really learned is that 0 is a way to represent the number zero in a visual medium.
It is useful to have ways to represent numbers in a set-theoretic medium. The notation scheme typically used is to represent a natural number as the set of all smaller numbers; e.g. we represent five as $\{ 0, 1, 2, 3, 4 \}$. '3' here, of course, means our representation of three, which is $\{0,1,2\}$. The first few numbers in this representation, if we really need to spell them out fully, are (see the small sketch after the list):
- 0 = {}
- 1 = {{}}
- 2 = {{}, {{}}}
- 3 = {{}, {{}}, {{}, {{}}}}
- 4 = {{}, {{}}, {{}, {{}}}, {{}, {{}}, {{}, {{}}}}}
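For anyone who wants to generate these spelled-out forms rather than type them by hand, here is a short Python sketch (my own illustration; the helper `braces` is made up) that prints the fully expanded representation of the first few naturals:

```python
# Sketch: print the fully spelled-out brace form of the first few
# naturals in the "set of all smaller numbers" representation.

def braces(n):
    """Render the natural number n as the set of all smaller numbers."""
    return "{" + ", ".join(braces(k) for k in range(n)) + "}"

for n in range(5):
    print(n, "=", braces(n))
# 0 = {}
# 1 = {{}}
# 2 = {{}, {{}}}
# 3 = {{}, {{}}, {{}, {{}}}}
# 4 = {{}, {{}}, {{}, {{}}}, {{}, {{}}, {{}, {{}}}}}
```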