Over the course of my studies I often encounter phrases in reference material of the type "and this avoids the need for using $\epsilon$, $\delta$ definitions" or "by this we can omit those complicated $\epsilon, \delta$ arguments", etc. In other words, performing stunts in order to get around $\epsilon, \delta$. I've seen enough of this to think that it should be categorized as epsilondeltophobia, if you all will permit.

Personally, I was thrilled to learn definitions in these terms, because it was one of the first rigorous definitions given to me, all in terms of quantifier logic, and it was used for very fundamental things whose real meaning I had always wondered about. In the beginning, of course, I didn't have a clue how to use the language, but I loved it anyway because it was like, "wooow, deep maan". Not to mention that later on, I began to see that all of the higher-order constructions built upon $\epsilon, \delta$-objects worked out perfectly, giving me further confidence that whoever came up with the $\epsilon, \delta$ language knew what they were doing.

So I'm not saying that it's wrong to develop a bit of epsilondeltophobia, as we all do naturally in the beginning...but some textbooks seem to promote this fear, even some teachers, and this is what I'm not happy about. I think $\epsilon, \delta$ is great.

Question: who thinks likewise? oppositely?

Edit: I don't want this to come off as a pedantic "rigor or death" statement, or as a suggestion that first courses on calculus should always include $\epsilon, \delta$ (although maybe yes in mathematics). I'm just against predisposing students negatively toward it from the start.


I believe that the pushback against $\epsilon,\delta$ definitions (which unfortunately spills over to pushback against $\epsilon,\delta$ techniques) is entirely justified because $\epsilon,\delta$ definitions arise from the (unfortunately widespread) confusion between a statement being formal and a statement being rigorous.

Consider the formal "definition" of continuity of a function $f$ at a point $a$: $$\forall\epsilon>0\,\exists\delta>0\,\forall x\,(|x-a|<\delta\rightarrow |f(x)-f(a)|<\epsilon)$$ This is just an obfuscated way of stating the informal, but rigorous:

For every ball $B_{f(a)}$ centered at $f(a)$, there is a ball $B_a$ centered at $a$ so that $f$ sends every point of $B_a$ into $B_{f(a)}$.
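(To spell out the dictionary between the two formulations, with $\epsilon$ as the radius of $B_{f(a)}$ and $\delta$ as the radius of $B_a$: taking $$B_a = \{x : |x-a|<\delta\}, \qquad B_{f(a)} = \{y : |y-f(a)|<\epsilon\},$$ the clause "$f$ sends every point of $B_a$ into $B_{f(a)}$" unpacks to exactly $|x-a|<\delta \rightarrow |f(x)-f(a)|<\epsilon$.)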

which is logically equivalent to the conceptually clearer statement, still informal but rigorous:

Whenever the image $f(S)$ of a set $S$ is separated from the image $f(a)$ of a point $a$, the set $S$ was already separated from the point $a$.

which is the contrapositive of the informal but rigorous, intuitive definition of continuity of $f$ at a point $a$:

Whenever a set $S$ of points is close to a point $a$, the set of images $f(S)$ of those points is close to the image point $f(a)$.

I strongly believe that the equivalence of the block-quoted statements, and the IDEA that this equivalence expresses, which is that we CAN distill an intuitive notion into a rigorous definition, is much more interesting, important, and memorable than the formal $\epsilon,\delta$ "definition". Furthermore, I can't even bring myself to call the formal "definition" a definition, since what it expresses is not a description of what it means for a function to be continuous, but a technique (of $\epsilon,\delta$ proofs) for how to check that a function is continuous.

This, in my opinion, is the reason for the pushback against $\epsilon,\delta$ "definition" and arguments: instead of expressing the rigorous idea or concept of continuity, the $\epsilon,\delta$ "definition" only gives a technique for working with continuity, and, when presented as a definition, only obfuscates the meaning of the concept (in a very efficient way, I might add, since the path from the intuitive and meaningful definition to the $\epsilon,\delta$ definition involves taking a contrapositive...).

Finally, I do think that being aware of how to rigorously translate (as above) from the intuitive definition of continuity to the statement of the $\epsilon,\delta$ technique will certainly not hurt, and I suspect could actually help students in using the ($\epsilon,\delta$) technique, especially with the simple functions that arise in Calculus and basic analysis.
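To illustrate with a made-up example (any linear function would do): to check that $f(x)=3x+1$ is continuous at a point $a$, the technique runs as follows. Given $\epsilon>0$, choose $\delta=\epsilon/3$; then whenever $|x-a|<\delta$, $$|f(x)-f(a)| = |(3x+1)-(3a+1)| = 3|x-a| < 3\delta = \epsilon,$$ and the intuitive content is visible underneath the symbols: points within $\delta$ of $a$ land within $\epsilon$ of $f(a)$.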

(Someone might criticize the above saying that the notion of a ball is confusing in single-variable Calculus. My perhaps controversial response is that there really isn't any good reason not to teach Calculus using $2$ or $3$ variables from day $1$ and that the narrow viewpoint offered by single-variable Calculus obscures more than it simplifies).


It turns out that engineers, scientists, and financial folks need to use calculus, but they don't need to understand calculus.

The typical university curriculum funnels all of those students, plus math students, through the same introductory calculus courses. This is done for cost efficiency, and also because of a potentially misplaced ideal that career mathematicians should teach mathematics to people for whom mathematics is ultimately just an annoying means to an end.

So eliding $\epsilon$-$\delta$ arguments streamlines this process, saving trouble for the students and the instructors, at the expense of the math students. But those math students will encounter it later anyway.

I'm not saying it's the best approach, but it's a bit more efficient, perhaps. Mechanical engineers don't want to learn $\epsilon$-$\delta$, and math professors don't want to teach $\epsilon$-$\delta$ to students who will never truncate a Taylor series beyond the linear term.