Why can't epsilon depend on delta instead?

When presented with $\lim_{x\to a}f(x) = L$, we are usually taught to think of $x$ approaching the value $a$ from both sides, with $f(x)$ getting closer and closer to the value $L$. For example, to guess the value of $\lim_{x\to 3}(x+3)$, we plug in $2.9$, $2.999999$, or $3.01$, $3.00000001$, and see what happens. Or we draw a graph. This is how it was presented in high school calculus.
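For this example the arithmetic already suggests the answer: $f(2.9)=5.9$, $f(2.999999)=5.999999$, $f(3.01)=6.01$, $f(3.00000001)=6.00000001$, so the values cluster around $6$, and indeed $\lim_{x\to 3}(x+3)=6$.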

However, when rigorously proving that a limit exists, the notion of 'getting closer and closer to a value' is replaced with $\epsilon$-$\delta$ language. Intuitively: no matter how small a 'strip' of half-width $\epsilon>0$ you draw around $L$, if I can always find a corresponding strip around $a$ which ensures that the values of $f(x)$ stay within your strip around $L$, then I've proven that the limit exists.
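Written out, the rigorous definition I was taught reads
$$\lim_{x\to a}f(x)=L \iff \forall\,\epsilon>0\;\exists\,\delta>0:\quad 0<|x-a|<\delta \implies |f(x)-L|<\epsilon.$$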

The rigorous definition requires that $\epsilon$ be given first. This makes sense. But if we challenge someone with a $\delta$ instead, and our opponent fails to provide an $\epsilon$ such that $|f(x)-L|<\epsilon$ whenever $0<|x-a|<\delta$, wouldn't that prove that the limit doesn't exist? Why can't limits be defined this way instead of the other way round? I find this more natural, because in the intuitive definition we vary $x$ and observe what happens to $f(x)$. Suddenly, in the rigorous definition, we do the reverse: pick values around $L$ and check whether there are $x$'s which map to those values.
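In other words, the 'reversed' definition I have in mind would read
$$\forall\,\delta>0\;\exists\,\epsilon>0:\quad 0<|x-a|<\delta \implies |f(x)-L|<\epsilon.$$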

What is wrong with my reasoning?


I think it's more a matter of making the language precise and removing any confusion. You are right that an informal meaning of $\lim_{x \to a}f(x) = L$ is that if the values of $x$ are near $a$ then the values of $f(x)$ are near $L$.

How do we make this statement precise? We do that by quantifying the word "near". So the closeness of $x$ to $a$ is measured by the number $\delta$ and the closeness of $f(x)$ to $L$ is measured by $\epsilon$. We could have chosen any symbols, say $A$ and $B$, instead of $\epsilon,\delta$, but choosing Greek symbols gives an air of uber-ness / geekiness / nerdiness. Thus mathematicians signal that this concept is not to be taken lightly.

Now, as we said earlier, we want to ensure that when $x$ is close to $a$ then $f(x)$ is close to $L$. This is like parents wanting their kid to study hard to get good marks. The harder the kid studies, the better the marks. Now it should be obvious to anyone that the goal here is "to get good marks" and not merely "to study hard".

So in the case of limits the real goal is to ensure that $f(x)$ gets close to $L$. Getting $x$ close to $a$ is only a means to that end. Therefore we have to give a bound $\epsilon$ for $|f(x) - L|$ and then determine a $\delta$ such that $0 < |x - a| < \delta$ is sufficient to ensure the goal. When the goal fails (i.e. for some $\epsilon$ we are not able to find a corresponding $\delta$) we say that $L$ is not the limit of $f(x)$ as $x \to a$.
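Spelled out symbolically, $L$ fails to be the limit precisely when
$$\exists\,\epsilon>0\;\text{such that}\;\forall\,\delta>0\;\exists\,x:\quad 0<|x-a|<\delta\;\text{ and }\;|f(x)-L|\ge\epsilon.$$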

I should also point out the flaw in your argument. Suppose you challenge me with a $\delta$ and ask me to come up with an $\epsilon$ such that $|f(x) - L| < \epsilon$ whenever $0 < |x - a| < \delta$. Then you have made my task much easier, and I will always win the challenge by choosing a very large value of $\epsilon$. You have given me complete leverage over the goal: I can choose to miss it by a wide margin (a large value of $\epsilon$) while you keep putting in greater effort (choosing a smaller $\delta$ for the challenge). I hope you can see why I would win this challenge every time if we play the game by your rules.
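To see this concretely, suppose (just for illustration) that $f$ is bounded near $a$, say $|f(x)|\le M$ whenever $0<|x-a|<\delta$. Then whatever $\delta$ you challenge me with, I simply answer
$$\epsilon = M + |L| + 1,$$
and $|f(x)-L|\le |f(x)|+|L|\le M+|L|<\epsilon$ holds automatically for every such $x$, regardless of how $f$ actually behaves near $a$.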


Consider the counterexample $$f(x) = \begin{cases} \sin \frac{1}{x}, & x \ne 0 \\ 0, & x = 0 \end{cases}$$ and the limit $$L = \lim_{x \to 0} f(x).$$ Thus let $a = 0$. Under your characterization, given any $\delta > 0$, we can trivially find an $\epsilon > 0$ such that $|f(x) - L| < \epsilon$ whenever $0 < |x| < \delta$. For instance, set $\epsilon = 2$; then any choice of $L \in (-1,1)$ satisfies this "reversed" condition, despite the fact that there is no actual limit. This is why the correct definition requires us to pick $\epsilon$ first: it is the quantity by which the difference between the value of the function and its limiting value $L$ must be made arbitrarily small. Being free to choose as small a neighborhood of $x$-values as you please (which is what choosing $\delta$ amounts to) does not guarantee that the function's values in that neighborhood will be ever more tightly bounded around a limiting point.
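To confirm that the genuine limit really does fail here, take $\epsilon = \tfrac12$ in the correct definition. For any $\delta > 0$, the points
$$x_n = \frac{1}{\frac{\pi}{2} + 2\pi n}, \qquad y_n = \frac{1}{\frac{3\pi}{2} + 2\pi n}$$
lie in $(0,\delta)$ for $n$ large enough, with $f(x_n) = 1$ and $f(y_n) = -1$. Since these two values are distance $2$ apart, no single $L$ can satisfy $|f(x) - L| < \tfrac12$ at both points, so no $\delta$ works for $\epsilon = \tfrac12$, and the limit does not exist.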