Why can't calculus be done on the rational numbers?
$\newcommand{\QQ}{\mathbb{Q}}$ Derivatives don't really go wrong; it's antiderivatives that do. (EDIT: Actually, the more I think about it, this is just a symptom. The underlying cause is that continuity on the rationals is a much weaker notion than continuity on the reals.)
Consider the function $f : \QQ \to \QQ$ given by $$f(x) = \begin{cases} 0 & x < \pi \\ 1 & x > \pi \end{cases}$$
(Since $\pi$ is irrational, every rational $x$ falls into exactly one of the two cases, so $f$ is defined on all of $\QQ$.)
This function is continuous and differentiable everywhere in its domain. If $x < \pi$, then there's a neighborhood of $x$ in which $f$ is a constant $0$, and so it's continuous there, and $f'(x) = 0$. But if $x > \pi$, there's a neighborhood of $x$ in which $f$ is a constant $1$, so it's continuous there too, and $f'(x) = 0$ again.
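In symbols, for $x < \pi$: any rational $h$ with $0 < |h| < \pi - x$ keeps $x + h < \pi$, so $$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = \lim_{h \to 0} \frac{0 - 0}{h} = 0,$$ and likewise for $x > \pi$.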
So the antiderivatives of $0$ can look rather messy. By adding functions like this, you can construct arbitrarily "jagged" functions with zero derivative. As you can imagine, this completely destroys the Fundamental Theorem of Calculus, and any results that follow from it.
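To make the breakage concrete: $f' \equiv 0$ everywhere, yet $$f(4) - f(3) = 1 - 0 = 1 \neq 0,$$ so no integral of the zero function can recover the change in $f$. Stacking more irrational cut points gives ever jaggier functions with zero derivative, such as $$F(x) = \begin{cases} 0 & x < \sqrt{2} \\ 5 & \sqrt{2} < x < \pi \\ -3 & x > \pi. \end{cases}$$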
This can happen in the real line to some extent, but it's not nearly as bad. The traditional antiderivative of $1/x$ is $\ln|x| + C$. But so is the following function: $$ g(x) = \begin{cases} \ln x + C_1 & x > 0 \\ \ln(-x) + C_2 & x < 0 \end{cases} $$
By changing $C_1$ and $C_2$, we can push the two halves of the real line around completely independently. This is only possible because $1/x$ isn't defined at $0$, and so we've "broken" the real line at that point.
If you like dumb physical metaphors, here's one:
The real line is kind of like an infinite stick. If you wiggle a section of it, the whole thing must move.
With the $1/x$ example, you've made a cut at $x = 0$, and now you have two half-sticks. They can be wiggled independently, but each half must still move as a unit.
The rational numbers are more like a line of sawdust. You can't really move one grain by itself, but you can certainly take an interval and move it around independent of its neighbors.
By completing the rationals, you're adding all the glue between the grains to form a stick again. (I hope no one from diy.stackexchange is reading this...)
This is a slightly softer answer.
You can 'do calculus' insofar as you can define the derivative and perhaps compute some things. But you'll get no theorems out: the main interval theorems (the Intermediate Value Theorem and the Extreme Value Theorem) rely heavily on the fact that the real numbers are complete, which the rationals aren't. In fact, the 'Intermediate Value Property' is equivalent to the completeness of the reals, and I'm pretty sure the 'Extreme Value Property' is as well.
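Both failures are easy to exhibit. On $[0,2] \cap \mathbb{Q}$, the function $f(x) = x^2 - 2$ is continuous and satisfies $f(0) = -2 < 0 < 2 = f(2)$, yet it has no rational root, so the Intermediate Value Property fails. On $[1,2] \cap \mathbb{Q}$, the function $g(x) = 1/(x^2 - 2)$ is continuous (its denominator vanishes only at the irrational points $\pm\sqrt{2}$) but unbounded, so the Extreme Value Property fails too.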
Going on from there, Rolle's Theorem depends on the Extreme Value Theorem, the Mean Value Theorem depends on Rolle's Theorem, and Taylor's Theorem depends on the Mean Value Theorem.
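For instance, Rolle's Theorem already fails on $\mathbb{Q}$: take $h(x) = (x^2 - 2)^2$ on $[0,2] \cap \mathbb{Q}$. Then $h(0) = h(2) = 4$, but $h'(x) = 4x(x^2 - 2)$ vanishes only at $0$ and $\pm\sqrt{2}$, so there is no rational $c \in (0,2)$ with $h'(c) = 0$. The Mean Value Theorem fails along with it.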
Going in a different direction, L'Hopital's Rule is typically proven using the Cauchy Mean Value Theorem, which of course depends on the Mean Value Theorem. I don't know if there's some way to prove L'Hopital's Rule without this dependence, but I expect that if it's possible then the proof will depend crucially on completeness.
Above all, much of the usefulness (and beauty) of calculus comes from the theorems mentioned above. So while you can set up the usual definitions in non-complete spaces, and you may even be able to get some partial results, you eventually reach the question: is this really worth calling 'calculus'? It certainly doesn't compare to the real-variable theory.
All of this is not to mention the lack of a meaningful theory of integrals, which is detailed in other answers.
Addendum: Consider the 'rational complex numbers' $\mathbb{Q}[i]$. If you like, you can extend your rational calculus to $\mathbb{Q}[i]$, but I don't think it will bear much resemblance to complex analysis. As a first example: the proof I've seen that the Cauchy-Riemann equations, together with continuity of the partial derivatives of the real and imaginary parts, imply complex differentiability at a point depends on the Mean Value Theorem.
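(For reference: writing $f(x+iy) = u(x,y) + iv(x,y)$, the Cauchy-Riemann equations are $$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.$$)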
It seems like most people are talking about integrals; let me answer with the first thing that popped into my head about derivatives.
$$\lim_{x\rightarrow a}f(x)=L \iff \forall\epsilon>0\ \exists\delta>0\ \forall x\ \bigl(0<|x-a|<\delta \rightarrow |f(x)-L|<\epsilon\bigr)$$
(with $a$, $L$, $x$, $\epsilon$, and $\delta$ all ranging over $\mathbb{Q}$)
While this may be a fine definition for rational-restricted limits, it winds up saying nothing about non-rational limits, which are almost all of them. For example, the sequence $1, 1.4, 1.41, \ldots$ of decimal truncations of $\sqrt{2}$ has no limit according to this (rational-restricted) definition.
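For a concrete sketch of such a sequence (plain Python, using `fractions.Fraction` for exact rational arithmetic; the setup here is just illustrative), Newton's iteration for $\sqrt{2}$ never leaves $\mathbb{Q}$:

```python
from fractions import Fraction

# Newton's iteration x -> (x + 2/x)/2 for sqrt(2), in exact rational arithmetic.
# Every iterate is a rational number and the sequence is Cauchy, but its only
# candidate limit, sqrt(2), is irrational -- so within Q it has no limit at all.
x = Fraction(1)
for n in range(6):
    x = (x + 2 / x) / 2              # stays in Q: sums and quotients of rationals
    print(n, x, float(x * x - 2))    # x*x - 2 shrinks toward 0
```

The squares get as close to $2$ as you like, yet no rational number is ever reached.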
Likewise if we seek to use the definite integral as the area under some curve, then we find that the rationals are a set of measure zero and therefore the area over that set is automatically zero.
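Concretely: $\mathbb{Q}$ is countable, so for any $\epsilon > 0$ it can be covered by intervals of total length $\sum_{n \ge 1} \epsilon/2^n = \epsilon$. Hence $\lambda(\mathbb{Q}) = 0$, and $$\int_{\mathbb{Q} \cap [a,b]} f \, d\lambda = 0$$ for every Lebesgue-integrable $f$.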
In short, because the rationals are a set of measure zero, they account for almost none of the limits of rational functions or sequences, and no area under any curve of interest. I do find that a tiny bit of knowledge of Lebesgue measure theory makes a lot of this stuff immediately clear.
Based on your comments, I think you might be particularly interested in the theory of real closed fields, or more generally, real algebraic geometry.
There is a formal, logical sense in which all real closed fields are the "same": by Tarski's theorem, they all satisfy exactly the same first-order sentences in the language of ordered fields. Two prominent examples of real closed fields are the real numbers and the real algebraic numbers.
By leveraging this "sameness", it turns out that a large fragment of calculus still works the same way if you stick to algebraic numbers and algebraically defined functions.
It's mainly notions like continuity and differentiation, together with the first-order consequences of completeness, that carry over well (the real algebraic numbers are not themselves complete); other techniques can be developed too, but I believe they tend to follow more along the lines of algebraic geometry than that of the calculus of real numbers.
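For example, the intermediate value theorem for polynomials of each fixed degree is a first-order statement about ordered fields, so it holds in the real algebraic numbers: there, $x^2 - 2$ really does have a root, since $\sqrt{2}$ is algebraic. The rational counterexamples above all exploit algebraic gaps like $\sqrt{2}$, which the real algebraic numbers fill.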