Understanding Gödel's Incompleteness Theorem
Solution 1:
There's a fair amount of historical context needed to make some of Gödel's choices in his original proof clear. For a good overview of the proof, Nagel and Newman's Gödel's Proof is a solid starting point (though not without its detractors). I also highly recommend the late Torkel Franzén's Gödel's Theorem: An Incomplete Guide to Its Use and Abuse. I may myself be guilty of some of those abuses below; I'm trying to shy away from a lot of the technical details, and this usually invites imprecision and even abuse in this particular field. I hope those who know better than I do will keep me honest via comments and appropriate ear-pulling.
Some history. Sometimes I think of the 19th century as the Menagerie century. A "menagerie" was essentially a zoo of strange animals. During much of the 19th century, people were trying to clean up the logical problems that abounded in the foundations of mathematics. Calculus plainly worked, but many of the notions of infinitesimals were simply incompatible with some 'facts' about real numbers (eventually resolved by Weierstrass's notion of limits using $\epsilon$s and $\delta$s). A lot of assumptions people had been making, implicitly or explicitly, were shown to be false through the construction of explicit counterexamples (the "weird animals" that led me to call it the menagerie century; many mathematicians were like explorers bringing back strange animals nobody had seen before, which challenged people's notions of what was and was not the case): the Dirichlet function, showing that a function can be discontinuous everywhere; functions that are continuous everywhere but differentiable nowhere; the failure of Dirichlet's principle for some functions; Peano's curve that fills up a square; and so on. Then came the antinomies (paradoxes and contradictions) of early set theory. Even some work which today we find completely unproblematic caused a lot of debate: Hilbert's solution of the problem of a finite basis for invariants in any number of variables was originally derided by Gordan as "theology, not mathematics" (the solution was not constructive and involved an argument by contradiction). Many found these developments deeply troubling. A foundational crisis arose (see the link in Qiaochu's answer).
Hilbert, one of the leading mathematicians of the late 19th century, proposed a way to settle the differences between the two main camps. His proposal was essentially to try to use methods that both camps found unassailably valid to show that mathematics was consistent: that it was not possible to prove both a proposition and its negation. In fact, his proposal was to use methods that both camps found unassailable to prove that the methods that one camp found troublesome would not introduce any problems. This was the essence of the Hilbert Programme.
In order to accomplish this, however, one needed some way to study proofs and mathematics itself. Thus arose the notion of "formal proof", the idea of an axiomatization of basic mathematics, and so on. There were several competing axiomatic systems for the basics of mathematics: Zermelo attempted to axiomatize set theory (later expanded into Zermelo-Fraenkel set theory); Peano had proposed a collection of axioms for basic arithmetic; and, famously, Russell and Whitehead had, in their massive Principia Mathematica, attempted to establish an axiomatic and deductive system for all of mathematics (as I recall, it takes hundreds of pages to finally get to $1+1=2$). Some early successes were achieved, with people showing that some parts of such theories were in fact consistent (more on this later). Then came Gödel's work.
Consistency and Completeness. We say a formal theory is consistent if you cannot prove both $P$ and $\neg P$ in the theory for any sentence $P$. In fact, because from $P$ and $\neg P$ you can prove anything using classical logic, a theory is consistent if and only if there is at least one sentence $Q$ for which there is no proof of $Q$ in the theory. By contrast, a theory is said to be complete if, given any sentence $P$, either the theory has a proof of $P$ or a proof of $\neg P$. (Note that an inconsistent theory is necessarily complete.) Hilbert proposed to find a consistent and complete axiomatization of arithmetic, together with a proof (using only the basic mathematics that both camps agreed on) that it was both complete and consistent, and that it would remain so even if some of the tools that his camp used (which the other camp found unpalatable and doubtful) were added to it.
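In symbols (writing $T \vdash P$ for "$P$ is provable in the theory $T$", standard notation introduced here just for compactness), the two notions are:
$$T \text{ is consistent} \iff \text{there is no sentence } P \text{ with } T \vdash P \text{ and } T \vdash \neg P;$$
$$T \text{ is complete} \iff \text{for every sentence } P, \text{ either } T \vdash P \text{ or } T \vdash \neg P.$$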
Why arithmetic? Arithmetic was a particularly good field to focus on in the early efforts. First, it was the basis of the "other camp": Kronecker famously said "God gave us the natural numbers, the rest is the work of man." It was hoped that an axiomatization of the natural numbers and their basic operations (addition, multiplication) and relations (order) would be relatively easy, and would also have a hope of being both consistent and complete. That is, it was a good testing ground, because it contained a lot of interesting and nontrivial mathematics, and yet seemed reasonably simple.
Gödel. Gödel focused on arithmetic for this reason. As it happens, multiplication is key to the argument (there is something special about multiplication; some theories of the natural numbers that include only addition can be shown to be consistent using only the kinds of tools that Hilbert allowed). To answer one of your questions along the way: Gödel even defined new operations and relations on natural numbers that had little to do with addition and multiplication, so yes, there are operations other than those (no fetishism about them at all). But in fact, Gödel did not restrict himself solely to arithmetic. His proof is, on its face, about the entire system of mathematics set forth in Russell and Whitehead's Principia, though as Gödel notes it can easily be adapted to other systems so long as they satisfy certain criteria (that's why Gödel's original paper has a title that explicitly refers to the Principia "and related systems").
What Gödel showed was that any theory, subject to some technical restrictions (for example, you must have a way of recognizing whether a given sentence is or is not an axiom), that is "strong enough" that you can use it to define a certain portion of arithmetic will necessarily be either incomplete or inconsistent (that is, either you can prove everything in that theory, or else there is at least one sentence $P$ such that neither $P$ nor $\neg P$ can be proven). It's not a limitation based on four operations, or an obsession with those operations: quite the opposite. What it says is that if you want your theory to include at least some arithmetic, then your theory is going to be so complicated that either it is inconsistent, or else there are propositions that can neither be proven nor disproven using only the methods that both camps found valid.
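Stated schematically (and glossing over the precise technical hypotheses, which vary a bit between presentations): if $T$ is a consistent, effectively axiomatized theory in which enough arithmetic can be represented, then there is a sentence $P$ with
$$T \nvdash P \quad\text{and}\quad T \nvdash \neg P.$$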
That is: what it shows is a limitation of those particular (logically unassailable) methods. If we use other methods, we are able to establish consistency of arithmetic, for example, but if you had your doubts about the consistency of arithmetic in the first place, chances are you will find those methods just as doubtful.
Now, about your coda, and the statement "statements that are true but unprovable": this is not a very apt phrasing. You will find a lot of criticism of this paraphrase in Franzén's book, with good reason. It's better to think of it this way: there are statements that are neither provable nor disprovable. In fact, one of the things we know is that if you have such a statement $P$ in a theory $T$, then you can find a model (an interpretation of the axioms of $T$ that makes the axioms true) in which $P$ is true, and a different model in which $P$ is false. So in a sense, $P$ is neither "true" nor "false", because whether it is true or false will depend on the model you are using. For example, the Gödel sentence $G$ that establishes the First Incompleteness Theorem (that if arithmetic is consistent then it is incomplete, since there can be no proof of the sentence $G$ and no proof of $\neg G$) is often said to be "true but unprovable" because $G$ can be interpreted as saying "There is no proof of $G$ in this theory." But in fact, you can find a model of arithmetic in which $G$ is false, so why do we say $G$ is "true"? Well, the point is that $G$ is true in what is called "the standard model." There is a particular interpretation of arithmetic (of what "natural number" means, of what $0$ means, of what "successor of $n$" means, of what $+$ means, etc.) which we usually have in mind; in that model, $G$ is true but not provable. But we know that there are different models (where "natural number" may mean something completely different, or perhaps $+$ means something different) in which $G$ is false under that interpretation of the axioms. I would stay away from "true" and "false", and stick with "provable" and "unprovable" when discussing this; it tends to prevent problems.
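For the record, the sentence $G$ is constructed (via what is now called the diagonal or fixed-point lemma) so that, writing $\mathrm{Prov}_T(x)$ for a formula expressing "$x$ is the code of a sentence provable in $T$" (standard notation, used here only for illustration),
$$T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G\urcorner),$$
which is the precise sense in which $G$ "says" that it is unprovable.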
First Summary. So: there were historical reasons why Gödel focused on arithmetic; the limitation is not of arithmetic itself, but rather of the formal methods in question: if your theory is sufficiently "strong" that it can represent part of arithmetic (plus it satisfies the few technical restrictions), then either your theory is inconsistent, or else the finitistic proof methods at issue cannot suffice to settle all questions (there are sentences $P$ which can neither be proven nor disproven).
Can something be rescued? Well, ideally of course we would have liked a theory that was complete and consistent, and that we could show is complete and consistent using only the kinds of logical methods which we, and pretty much everyone else, finds beyond doubt. But perhaps we can at least show that the other methods don't introduce any problems? That is, that the theory is consistent, even if it is not complete? That at least would be somewhat of a victory.
Unfortunately, Gödel also proved that this is not the case. He showed that if your theory is sufficiently complex that it can represent the part of arithmetic at issue (and it satisfies the technical conditions alluded to earlier), and the theory is consistent, then in fact one of the things that it cannot settle is whether the theory is consistent! That is, one can write down a sentence $C$ which makes sense in the theory, and which essentially "means" "This theory is consistent" (much like the Gödel sentence $G$ essentially "means" "there is no proof of $G$ in this theory"), and one can prove that if the theory is consistent then the theory has no proof of $C$ (and, under a slightly stronger assumption than mere consistency, no proof of $\neg C$ either).
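Schematically, if we let $\mathrm{Con}(T)$ abbreviate a sentence such as $\neg\mathrm{Prov}_T(\ulcorner 0=1\urcorner)$ (one common way of formalizing "this theory is consistent"; the exact choice of formalization does matter), then the Second Incompleteness Theorem says:
$$\text{if } T \text{ is consistent (and satisfies the conditions above), then } T \nvdash \mathrm{Con}(T).$$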
Again, this is a limitation of those finitistic methods that everyone finds logically unassailable. In fact, there are proofs of the consistency of arithmetic using transfinite induction, but as I alluded to above, if you harbored doubts about arithmetic in the first place, you are simply not going to be very comfortable with transfinite induction either! Imagine that you are not sure that $X$ is being truthful, and $X$ suggests that you ask $Y$ about $X$'s truthfulness; you don't know $Y$, but $X$ assures you that $Y$ is very trustworthy. Well, that's not going to help you, right?
Key take-away: Because the theorem applies to any theory that is sufficiently complex (and satisfies the technical restrictions), we are not even in a position to escape these problems by enlarging our set of axioms. So long as we restrict ourselves to enlargement methods that we find logically unassailable, the technical restrictions will still be satisfied, so the new theory, stronger and larger though it is because it has more axioms, will still be incomplete (though the sentences that are now undecidable may be different ones; remember that by adding axioms, we are also potentially expanding the kinds of things we can talk about). So the theorems are not about shortcomings of particular axiomatic systems, but rather about those finitistic methods across a very large class of systems.
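A concrete way to see this: if $G$ is the Gödel sentence for $T$ and we pass to the enlarged theory $T' = T + G$ (adding $G$ as a new axiom), then $T'$ still satisfies all the hypotheses of the theorem, so it has a Gödel sentence $G'$ of its own, different from $G$, which $T'$ can neither prove nor disprove; and the process never terminates.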
What about those 'technical restrictions'? They are important. Suppose that arithmetic is consistent. That means there is at least one model for it. We could pick a model $M$, and then say: "Let's make a theory whose axioms are exactly those sentences that are true when interpreted in $M$." This is a complete and consistent axiomatic system for arithmetic. Complete, because each sentence is either true in $M$ (and hence an axiom, hence provable in this theory) or else false in $M$, in which case its negation is true (and hence an axiom, and hence provable). And consistent, because it has a model, $M$, and a theory is consistent if and only if it has a model. The problem with this axiomatic theory is that if I give you a sentence, you're going to have a hard time deciding whether or not it is an axiom! We didn't really achieve anything by setting up this axiomatic system. The "technical restrictions" exist partly to make the system actually usable, and partly to deal with certain technical issues that arise from the mechanics of the proof. But the restrictions are mild enough that pretty much everyone agrees that most reasonable theories will satisfy them.
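In more standard notation, the theory just described is usually written
$$\mathrm{Th}(M) = \{\varphi : M \models \varphi\},$$
the set of all sentences true in the model $M$; the "technical restriction" it violates is effective axiomatizability: there is no mechanical procedure that decides whether a given sentence belongs to this set of axioms.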
Second summary. So: if you have a formal axiomatic system which satisfies certain technical (but mild) restrictions, and the theory is strong enough that you can represent (a certain part of) arithmetic in it, then the theory is either inconsistent, or else the finitistic methods that everyone agrees are unassailable are insufficient to prove or disprove every sentence of the theory; worse, one of the things that those finitistic methods cannot prove or disprove is whether the theory is in fact consistent.
Hope that helps. I really recommend Franzén's book in any case. It will lead you away from potential misinterpretations of what the theorem says (I am likely guilty of a few myself above, which will no doubt be addressed in comments by those who know better than I do).
Solution 2:
The incompleteness theorem is much more general than "arithmetic and its four operators." What it says is that any effectively generated formal system which is sufficiently powerful is either inconsistent or incomplete. This requires that the system be capable of talking about arithmetic, but it can also talk about much more than arithmetic.
"To me it seems as though making a statement about the limitations of provability given 4 arbitrary operators is absurd."
Then you should study harder! Mathematics is not beholden to your intuition. When your intuition clashes with the mathematics, you should assume that your intuition is wrong. (I am not trying to be harsh here, but this is an attitude you need to digest before you can really learn any mathematics.)
You also shouldn't think of arithmetic as just being about the arithmetic operations: it's also about the quantifiers.
"As a follow-up remark, why the obsession with arithmetic operators?"
Who claims to have an obsession with arithmetic operators? Perhaps what you are missing here is historical context. The historical context is, roughly, this: back in Gödel's day there was a program, initiated by Hilbert, which sought to give a complete set of axioms for all of mathematics. That is, Hilbert wanted to write down a set of (consistent) axioms from which all mathematical truths were deducible. (To really understand why Hilbert wanted to do this you should read about the foundational crisis in mathematics). This was very grand and ambitious and many people had great hopes for Hilbert's program until Gödel destroyed those hopes with the Incompleteness theorem, which shows that Hilbert's program could not possibly succeed: if the axioms are powerful enough to talk about arithmetic, then they are either inconsistent (you can prove false statements) or incomplete (you can't prove some true statements).
Solution 3:
Before you understand Gödel's incompleteness theorem, you need to know what a formal system is. A formal system can be thought of as a very strict language where the grammar completely determines the allowable sentences. So, the incompleteness theorem is a statement that "In a formal system sufficiently strong to represent arithmetic, there are sentences which are true but unprovable", which is, to use your phrase, a "limitation on provability (in a formal system)".
You asked "Godel's theorem is proved based on Arithmetic and its four operators: is all mathematics derived from these four operators (×, +, -, ÷) ?"
Answer: No, there is mathematics that does not use those operators. You are also claiming that multiplication and division are inverses on the natural numbers, but that is not true: $7/4$ is not an allowed division on the natural numbers; it is only when you allow fractions that this becomes possible.
You asked "There must be operators that are consistent on the natural numbers that we certainly aren't aware of, no?"
Answer: Depends on what you mean by "operator". In mathematics we generally think of an "operator" on natural numbers as a function from a pair of natural numbers to another natural number, so there are in fact infinitely many operators on natural numbers. So yes, there are some we "aren't aware of" because there are more than we could ever count. For example, let's invent the "square product operator" (let's give it the symbol ¤) as taking two numbers to the product of their squares, so 2 ¤ 3 = (2*2)*(3*3) = 36. The possibilities are limitless!
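A minimal sketch in Python, just to illustrate that any function from pairs of natural numbers to natural numbers counts as such an "operator" (the name square_product is made up for this example):

```python
def square_product(a: int, b: int) -> int:
    """The invented 'square product' operator: maps (a, b) to (a*a)*(b*b)."""
    return (a * a) * (b * b)

print(square_product(2, 3))  # (2*2)*(3*3) = 4*9 = 36
```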
Solution 4:
This is a personal recommendation.
Peter Smith's "An Introduction to Gödel's Theorems" is very readable and self-contained.
I believe that to fully appreciate Gödel's theorems it is a good idea to learn some computability theory first (Smith does include some of this background in his book, but a bit more perspective doesn't hurt).
Two very good sources for beginners are Cutland's "Computability" and Boolos and Jeffrey's "Computability and Logic". There's a forthcoming book from Enderton (who unfortunately recently passed away), "Computability Theory", which I have been able to read and highly recommend too. For a quick (and unusually clear) introduction to computability theory you can take a look at Enderton's notes, which, by the way, are an excerpt from his book:
http://www.math.ucla.edu/~hbe/computability.pdf
I hope this helps (it did for me).
Edit: Let me also recommend a very nice book by Melvin Fitting, Incompleteness in the Land of Sets. As the title suggests, the author shows us how to discuss incompleteness in its most natural place: the land of hereditarily finite sets.