Calculus over $\mathbb{Q}$
The mismatch between the sensitivity of 'mathematical calculus' and the flexibility of 'real world calculus' has been bothering me a bit recently. What I mean is this: in the real world, I can trust that calculus will work whether $\mathbb{R}$ is "actually real" or not. "Continuity" can be treated as relative to a certain scale, and this is good enough for many purposes. Mathematically, of course, this is not the case. Weakening even to $\mathbb{Q}$ causes, say, the intermediate value theorem to fail. But I guess what I'm really wondering is whether this is just an artifact of the chosen definitions, and if a clean formalism exists that more accurately captures how/why calculus is so widely applicable to real-world problems. And so I come to my main questions:
- If we replace the idea of exactly hitting a number (as used in the IVT, EVT, and MVT) with "getting arbitrarily close to it", can we still cleanly and consistently develop calculus? It seems like $\mathbb{Q}$ could support some form of calculus this way, since as far as I can see the whole '$\forall\epsilon\exists\delta$' paradigm is left intact. The usual counterexample to the IVT in $\mathbb{Q}$ asks for roots of $y=x^2-2$, but this would be avoided because $y$ gets arbitrarily close to $0$ (see the bisection sketch after this list). Moreover, restricting to smaller and smaller intervals where $y$ gets arbitrarily close to $0$ can also show that it doesn't get arbitrarily close to any other number "at the same time". Could this be considered a workable form of continuity?
- Is there a form of calculus that deals with finite error without actually lugging explicit error terms around (i.e. a 'fuzzy calculus'), as in "any differences less than the tolerance $\epsilon_0$ are ignored"? I suppose this is somewhat related to nilpotent infinitesimals/dual numbers and big-O notation, but since these still work with infinitely precise numbers, they aren't quite what I'm looking for. I'd imagine that there's no way of doing this without running into the same ugly issues as floating-point arithmetic (the second sketch below shows one such issue), but I figure it doesn't hurt to ask.
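For concreteness, here's a minimal sketch of the first point (in Python, using exact rationals via `fractions.Fraction`; the setup is just my own illustration): bisection on $[1,2]$ keeps every iterate in $\mathbb{Q}$, and $y=x^2-2$ never hits $0$ exactly, yet $|y|$ gets arbitrarily close to $0$.

```python
from fractions import Fraction

def f(x):
    return x * x - 2

# Bisection entirely inside Q: every endpoint and midpoint is an exact
# rational, and f(mid) is never exactly 0 (sqrt(2) is irrational),
# but |f| shrinks below any epsilon we care to name.
lo, hi = Fraction(1), Fraction(2)  # f(lo) = -1 < 0 < 2 = f(hi)
for _ in range(30):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(float(f(lo)), float(f(hi)))  # both tiny, but neither is ever exactly 0
```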
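And for the second point, a tiny illustration of why a fixed tolerance gets ugly (the value of $\epsilon_0$ here is an arbitrary assumption of mine): "equal up to $\epsilon_0$" is not transitive, so sub-tolerance differences can silently accumulate past the tolerance, which is essentially the floating-point problem again.

```python
eps0 = 1e-6  # hypothetical tolerance, chosen arbitrarily for illustration

def approx_eq(a, b, tol=eps0):
    """Treat a and b as equal when they differ by less than tol."""
    return abs(a - b) < tol

a, b, c = 0.0, 0.6e-6, 1.2e-6
print(approx_eq(a, b))  # True:  |a - b| = 6e-7 < eps0
print(approx_eq(b, c))  # True:  |b - c| = 6e-7 < eps0
print(approx_eq(a, c))  # False: |a - c| = 1.2e-6 >= eps0, so two "ignored"
                        # differences have accumulated into a visible one
```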
Solution 1:
In general, this is more of a question on the epistemology of mathematics. Einstein answered it with "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."
But more on the question itself: the key difference is the completeness axiom. That is, every nonempty set of real numbers that is bounded above has a least upper bound. This doesn't hold for the rationals: for example, the set $\{x \in \mathbb{Q} : x^2 < 2\}$ is bounded above but has no least upper bound in $\mathbb{Q}$. This axiom underlies (almost?) everything in real analysis, including the IVT, EVT, and MVT you mention.
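To make the failure concrete, here is a small sketch (Python with exact rationals; my own illustration, not part of the standard statement): from any rational upper bound $u$ of $S=\{x \in \mathbb{Q} : x^2 < 2\}$ with $u^2 > 2$, the Newton step $u' = (u + 2/u)/2$ is again rational, is still an upper bound (by AM-GM, $u' > \sqrt{2}$), and is strictly smaller, so no rational upper bound can be the least one.

```python
from fractions import Fraction

# S = {x in Q : x**2 < 2} is bounded above in Q but has no least upper
# bound there: from any rational upper bound u with u**2 > 2, the Newton
# step (u + 2/u)/2 is a strictly smaller rational upper bound.
u = Fraction(2)  # an upper bound of S, since 2**2 = 4 > 2
for _ in range(5):
    u = (u + 2 / u) / 2  # still rational, still an upper bound, strictly smaller
    print(u, "  u^2 - 2 =", float(u * u - 2))
```

Each iterate's square stays above $2$, so the process never bottoms out at a least upper bound; the "missing" supremum is exactly $\sqrt{2}$.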