Struggling to understand epsilon-delta
The definition of a limit is:
$\lim_{x\to a}f(x)=L$ if for every $\epsilon > 0$ there is a $\delta > 0$ so that whenever $0 < \lvert x - a \rvert < \delta$ we have $\lvert f(x) - L \rvert < \epsilon$
Now it seems pretty intuitive. But I am hung up on a few problems:
1. Many pictures show something like this:
[figure: the usual epsilon-delta diagram]
This seems intuitive at first, and it demonstrates that $\lvert x - a \rvert$ and $\lvert f(x) - L \rvert$ are not necessarily equal (the graph can be deceptive here, especially if $f(x)$ is a straight line): when you project from $L$ to the graph and down to $a$, $\lvert x - a \rvert$ and $\lvert f(x) - L \rvert$ will be different. The problem in my understanding became apparent when I saw a similar graph in a textbook where the projected lines were not $\lvert f(x) - L \rvert$ itself but a projection drawn for aesthetic purposes, merely bounded by $\lvert f(x) - L \rvert$. I then realized I don't get it geometrically at all (Google "mooculus", page 20).
2. I don't understand what the "verification" in the proof is. It seems to be a tautology. For example, take $f(x) = 3x - 1$ with $L = 2$, so that $\lvert f(x) - L \rvert < \epsilon$ becomes $\lvert (3x - 1) - 2 \rvert < \epsilon$. You will eventually get to $\lvert x - 1 \rvert < \epsilon/3$. Then the proof is "completed" by showing that $\lvert x - a \rvert < \delta \Longrightarrow \lvert x - 1 \rvert < \epsilon/3 \Longrightarrow \lvert f(x) - L \rvert < \epsilon$. But $\delta$ is taken to be $\epsilon/3$. It seems to be the equivalent of demonstrating that $x + 1 = -2$ by plugging $-3$ into it.
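Written out, the two steps I mean are (with $f(x) = 3x - 1$, $a = 1$, $L = 2$): \begin{align*} \text{scratch work:}\quad &\lvert (3x-1) - 2 \rvert < \epsilon \iff 3\,\lvert x - 1 \rvert < \epsilon \iff \lvert x - 1 \rvert < \epsilon/3\\ \text{verification:}\quad &\text{set } \delta = \epsilon/3;\ \text{if } 0 < \lvert x - 1 \rvert < \delta \text{ then } \lvert (3x-1) - 2 \rvert = 3\,\lvert x - 1 \rvert < 3\delta = \epsilon. \end{align*} To me the second line just retraces the first, which is why it reads as circular.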
3. The proof starts by assuming either that the limit exists or that it doesn't. In fact, I've found that many textbooks and teachers take this approach:
Think of it as a game. You give me an $\epsilon > 0$ and I can give you a $\delta > 0$...
But they typically omit a glaring part: that if this doesn't hold, the limit doesn't exist. Also glaringly missing: I haven't seen an example of a proof that isn't simply "proving" or "showing" a premise that is already assumed! How would you, for example, use the epsilon-delta definition to show that a limit doesn't exist, if you don't already know in advance that this is the case?
4. To extend on number 3: I'm aware that you must choose an $\epsilon$ and that if you prove it for one, you prove it for all. However, the catch is in cases where $\lvert x - a \rvert$ needs to be restricted (e.g., $\lvert x - a \rvert < 1$): $1$ is typically chosen, but this does not work in all cases! I have no intuition for how to make these choices, let alone any way to know whether I'm simply doing something wrong or whether the limit does not exist. That is, proving the negative seems more difficult.
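For concreteness, the kind of argument I mean is the standard textbook computation for $\lim_{x\to 2} x^2 = 4$ (as I understand it): first restrict $\lvert x - 2 \rvert < 1$, so that $\lvert x + 2 \rvert < 5$, and then \begin{align*} \lvert x^2 - 4 \rvert = \lvert x + 2 \rvert\,\lvert x - 2 \rvert < 5\,\lvert x - 2 \rvert < \epsilon \quad\text{whenever}\quad \lvert x - 2 \rvert < \delta = \min\left(1, \frac{\epsilon}{5}\right). \end{align*} I can follow the algebra, but I don't see where the $1$ comes from, or what to do when that choice fails.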
Can someone explain it in a different way? I've resorted to many different .edu sources, free online textbooks, and even questions on this site, and the pedagogy doesn't seem to be reaching me.
This is not a complete answer; it only addresses the comment of user GFauxPas, which in turn addresses problem 3 of the original poster:
I've found that many textbooks and teachers take this approach:
Think of it as a game. You give me an $\epsilon > 0$ and I can give you a $\delta > 0$... But they typically omit a glaring part: that if this doesn't hold, the limit doesn't exist.
We show how the guessing of the right $\delta>0$ may fail at a discontinuity, using a slight modification of the OP's example: \begin{align*} f(x) :=\begin{cases} 3x-1 & \text{if } x\neq 1\\ 0 & \text{if } x=1 \end{cases} \end{align*}
Assume, for the sake of contradiction, that $f$ is continuous at $x=1$. Let $\varepsilon>0$ be arbitrary (and small). Searching for an appropriate $\delta>0$ with $|f(x)-f(1)|<\varepsilon$ for all $|x-1|<\delta$, we determine the pre-image of the $\varepsilon$-neighborhood $(-\varepsilon,\varepsilon)$ of $f(1)=0$: \begin{align*} \{x\in\mathbb{R}:\,|f(x)-f(1)|<\varepsilon\} &= \underbrace{\{1\}}_{\text{for }f(x)=0}\cup \{x\in\mathbb{R}\setminus\{1\}:\,|3x-1|<\varepsilon\}\\ &=\{1\} \cup \left(\frac{1-\varepsilon}3,\frac{1+\varepsilon}3\right) \end{align*}
Starting from $\varepsilon=2$, the open interval $\left(\frac{1-\varepsilon}3,\frac{1+\varepsilon}3\right)$ separates from the location $x=1$ as $\varepsilon$ shrinks. So the argument $x=1$, which maps into the open $\varepsilon$-neighborhood of $f(1)$, becomes isolated in the pre-image of this neighborhood. The pre-image is therefore not a neighborhood of $x=1$, and so it does not contain the interval $(1-\delta,1+\delta)$ for any $\delta>0$.
The following figure shows the pre-image of $(f(1)-\varepsilon,f(1)+\varepsilon)$ for $\varepsilon=1$, i.e., $f^{-1}((-1,1))=\{x\in\mathbb{R}:\, -1<f(x)<1\}$. We take $\varepsilon=1$ instead of $\varepsilon=2$ to make the isolation of $x=1$ in the pre-image of $(f(1)-\varepsilon,f(1)+\varepsilon)$ more visible. The open circle (∘) at $(x,y)=(1,2)$ indicates that there is a gap in the linear graph at $x=1$; this gap is filled by the point $(x,y)=(1,0)$, as indicated by the bullet (•) there.
The blue strip indicates the $y$-interval $(-1,1)$, whose pre-image $f^{-1}((-1,1))$ is colored yellow. We take special note of the yellow ring around the bullet at $(x,y)=(1,0)$, which is the isolated point of the pre-image.
There is no room around this point for a $\delta$-interval.
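One way to spell this out with a concrete $\varepsilon$: take $\varepsilon=1$. For every $\delta>0$ the point $x=1+\delta/2$ satisfies $0<|x-1|<\delta$, yet \begin{align*} |f(x)-f(1)|=\left|3\left(1+\frac{\delta}{2}\right)-1-0\right|=2+\frac{3\delta}{2}>1=\varepsilon, \end{align*} so no $\delta>0$ answers this $\varepsilon$, and $f$ is not continuous at $x=1$.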
Furthermore, a comment on the original poster's statement:
I'm aware that you must choose an $\epsilon$ and that if you prove it for one, you prove it for all.
That is not true. You must show that for all $\varepsilon>0$ you can find a $\delta>0$ such that $|f(x)-f(a)|<\varepsilon$ provided $|x-a|<\delta$. You do that by keeping $\varepsilon$ as a free variable and not binding $\varepsilon$ to any specific number. Only when you disprove continuity is it sufficient to provide a specific $\varepsilon>0$ such that the pre-image of $(f(a)-\varepsilon,f(a)+\varepsilon)$ is not a neighborhood of $a$.
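In quantifier form (spelling out which variable stays free and which may be chosen), the claim and its negation read: \begin{align*} &\forall\varepsilon>0\;\exists\delta>0\;\forall x:\quad |x-a|<\delta\implies|f(x)-f(a)|<\varepsilon\\ &\exists\varepsilon>0\;\forall\delta>0\;\exists x:\quad |x-a|<\delta\;\text{ and }\;|f(x)-f(a)|\geq\varepsilon \end{align*} To prove continuity, $\varepsilon$ must remain arbitrary; to disprove it, exhibiting one specific $\varepsilon$ (and then, for every $\delta$, one bad $x$) is enough.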
You're correct ... it's a game. I give an $\epsilon>0$, and then your job is to find some $\delta >0$ such that $|f(x) - L|< \epsilon$ whenever $|x-a|< \delta$. If you can always do this, then you win, and the limit exists. Saying it another way, you win if you can always make $|f(x) - L|$ as small as you like (specifically, smaller than the $\epsilon$ I gave you) by choosing a suitable $\delta$.
If I can find an $\epsilon$ for which you can't find a corresponding $\delta$, you lose -- the limit does not exist.
The unfair part is that you have to win an infinite number of rounds. If you win one round (by finding a good $\delta$), I have the right to choose a new $\epsilon$, and we have to play another round.
So, the only way for you to win, really, is to invent some procedure that automatically produces a winning $\delta$ no matter what $\epsilon$ I choose. Typically, your $\delta$ will be given by some rule that depends on the $\epsilon$ I give you. Not a formula, necessarily, but at least some well-defined process.
Let's try a couple of examples:
Example 1:
$f(x) = 2x; \; a = 1; \; L = 2$.
Easy for you to win. Given my value of $\epsilon$, you just choose $\delta = \tfrac12\epsilon$ or even $\delta = \tfrac13\epsilon$. With this procedure for choosing $\delta$, you win every round, so the limit exists. If you draw a picture, you can probably see that the factor $\tfrac12$ that appears in the winning strategy is related to the fact that the function has a slope of $2$. This kind of strategy works provided the function doesn't have an "infinite" slope.
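Writing out why $\delta = \tfrac12\epsilon$ wins the round: if $0 < |x-1| < \delta$, then \begin{align*} |f(x) - L| = |2x - 2| = 2\,|x-1| < 2\delta = \epsilon. \end{align*} The same computation with $\delta = \tfrac13\epsilon$ gives $|f(x)-L| < \tfrac23\epsilon < \epsilon$, which also wins; any $\delta \le \tfrac12\epsilon$ is a winning move.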
Example 2:
$f(x) = 0$ when $x < 5$ and $f(x) = 6$ when $x \ge 5;\;$ $a = 5;\;$ $L = \text{anything}$.
The value of this function "jumps" from $0$ to $6$ at $x=5$. With this function, you can't win. If I choose $\epsilon = 2$, for example, you're dead. Why did I choose $\epsilon = 2$? Because the function value "jumps" by $6$, and I need a number smaller than $6/2$, for reasons explained below. Then, no matter how small you choose your $\delta$, the interval $[5-\delta, 5+\delta]$ will include some points $x$ where $f(x)=0$, and some points $x$ where $f(x)=6$. Your best shot at proving a limit is to try $L= 3$ (halfway between $0$ and $6$). But even this won't work. No matter how small you choose your $\delta$, there will be points $x$ with $|x-5|<\delta$, but where $|f(x) - 3|> 2$. So, in this case, there is no limit at $x=5$. You run into this losing situation whenever the function has a "jump" in its value at $x=a$. A "jump" is one form of infinite slope; there are other more exotic forms, too.
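To see concretely why $\epsilon = 2$ kills every strategy: given any candidate $L$ and any $\delta > 0$, the points $x_1 = 5 - \delta/2$ and $x_2 = 5 + \delta/2$ both satisfy $0 < |x - 5| < \delta$, and by the triangle inequality \begin{align*} |f(x_1) - L| + |f(x_2) - L| \ge |f(x_2) - f(x_1)| = |6 - 0| = 6, \end{align*} so at least one of the two is $\ge 3 > 2 = \epsilon$. Any $\epsilon < 3$ (half the size of the jump) would have worked just as well.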
The $\epsilon$-$\delta$ approach is generally used to provide a rigorous proof that some given number is the limit. But, as you noted, you first have to guess that a limit exists, and you have to guess its value, and the $\epsilon$-$\delta$ stuff is of little help in this regard. You need intuition, guesswork, sketches of graphs, and any other tricks that work for you. Calculus is relatively easy; developing intuition is harder, and is harder to teach, too.