Commutativity of iterated limits

The following is a weird result I've obtained with iterated limits. There must be a flaw somewhere in someone's reasoning but I can't discover what it is. The problem is that, in general, iterated limits are not supposed to be commutative. However, the purported proof below seems to indicate that they always are. Did I make a mistake somewhere?

Let $F$ be a real valued function of two real variables defined in some region around $(a,b)$. Then the standard limit of $F$ as $(x,y)$ approaches $(a,b)$ equals $L\,$ if and only if for every $\epsilon > 0$ there exists $\delta > 0$ such that: $$ | F(x,y) - L | < \epsilon $$ whenever the distance between $(x,y)$ and $(a,b)$ satisfies: $$ 0 < \sqrt{ (x-a)^2 + (y-b)^2 } < \delta $$

We will use the following notation for such limits of functions of two variables: $$ \lim_{(x,y)\rightarrow (a,b)} F(x,y) = L $$

Note. At this moment, we do not wish to consider limits where (one of) the independent variable(s) approaches infinity.

Next we consider the following iterated limit: $$ \lim_{y\rightarrow b} \left[ \lim_{x\rightarrow a} F(x,y) \right] = L $$

Theorem (commutativity of iterated limits). $$ \lim_{y\rightarrow b} \left[ \lim_{x\rightarrow a} F(x,y) \right] = \lim_{x\rightarrow a} \left[ \lim_{y\rightarrow b} F(x,y) \right] = \lim_{(x,y)\rightarrow (a,b)} F(x,y) $$

Proof. We split the first iterated limit into two pieces: $$ \lim_{x\rightarrow a} F(x,y) = F_a(y) $$ and: $$ \lim_{y\rightarrow b} F_a(y) = L $$ Thus it becomes evident that the (first) iterated limit is actually defined as follows.
For every $\epsilon_x > 0$ there is some $\delta_x > 0$ such that: $$ | F(x,y) - F_a(y) | < \epsilon_x \quad \mbox{whenever} \quad 0 < | x - a | < \delta_x $$ For every $\epsilon_y > 0$ there is some $\delta_y > 0$ such that: $$ | F_a(y) - L | < \epsilon_y \quad \mbox{whenever} \quad 0 < | y - b | < \delta_y $$

Applying the triangle inequality $|a| + |b| \ge |a + b|$ gives: $$ | F(x,y) - F_a(y) | + | F_a(y) - L | \ge | F(x,y) - L | $$ Consequently: $$ | F(x,y) - L | < \epsilon_x + \epsilon_y $$

On the other hand we have: $$ 0 < | x - a | < \delta_x \qquad \mbox{and} \qquad 0 < | y - b | < \delta_y $$ Hence: $$ 0 < \sqrt{ (x-a)^2 + (y-b)^2 } < \sqrt{ \delta_x^2 + \delta_y^2 } $$

This is exactly the definition of the above standard limit of a function of two variables if we put: $$ \epsilon = \epsilon_x + \epsilon_y \qquad \mbox{and} \qquad \delta = \sqrt{\delta_x^2 + \delta_y^2} $$ Therefore: $$ \lim_{y\rightarrow b} \left[ \lim_{x\rightarrow a} F(x,y) \right] = \lim_{(x,y)\rightarrow (a,b)} F(x,y) $$ In very much the same way we can prove that: $$ \lim_{x\rightarrow a} \left[ \lim_{y\rightarrow b} F(x,y) \right] = \lim_{(x,y)\rightarrow (a,b)} F(x,y) $$ QED


Solution 1:

Great question! The error in your proof is somewhat subtle.

Let's assume that $F_a(y) = \lim_{x\to a}F(x,y)$ exists for all $y$ sufficiently close to $b$. Let $\epsilon > 0$. Then by definition there exists $\delta(y) > 0$, possibly dependent on $y$, such that

$$ 0 < |x - a| < \delta(y) \implies |F(x,y) - F_a(y)| < \epsilon/2. $$

Assume further that $L = \lim_{y\to b}F_a(y)$ exists. Then there exists $\delta > 0$ such that $$ 0 < |y - b| < \delta \implies |F_a(y) - L| < \epsilon/2. $$

So if $0 < |y-b| < \delta$ and $0 < |x-a| < \delta(y)$, then we do in fact have $$ |F(x,y) - L| < \epsilon. $$ However, this holds only for $0 < |y-b| < \delta$ and $0 < |x-a| < \delta(y)$, a region which (depending on the function $\delta(y)$) may not contain any punctured neighborhood of $(a,b)$ in $\mathbb{R}^2$. Accordingly, this is not the same as saying $\lim_{(x,y)\to(a,b)}F(x,y) = L$. Similar statements can be made about the second half of the argument.
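To see the pinching concretely, here is a small numerical sketch (my own addition, not part of the original answer) using $F(x,y) = \frac{xy}{x^2+y^2}$ near $(0,0)$: the inner limit is $F_a(y) = 0$ for each fixed $y \neq 0$, yet the largest admissible $\delta(y)$ shrinks roughly like $\epsilon\, y$.

```python
# Numerical sketch (my addition): for F(x, y) = x*y / (x^2 + y^2) near
# (0, 0), the inner limit F_a(y) = lim_{x->0} F(x, y) = 0 for each fixed
# y != 0, but the largest delta(y) with |F(x, y)| < eps for all
# 0 < x < delta(y) shrinks proportionally to y.
def F(x, y):
    return x * y / (x**2 + y**2)

eps = 0.1
deltas = {}
for y in [0.1, 0.01, 0.001]:
    # F(., y) increases on (0, y) up to its maximum 1/2 at x = y,
    # so bisect for the point where it first reaches eps
    lo, hi = 0.0, y
    for _ in range(60):
        mid = (lo + hi) / 2
        if F(mid, y) < eps:
            lo = mid
        else:
            hi = mid
    deltas[y] = lo
    print(f"y = {y:7.3f}: delta(y) ~ {lo:.6f} (about eps * y = {eps * y:.6f})")
```

Since $\delta(y) \to 0$ as $y \to 0$, the region $\{0 < |y-b| < \delta,\ 0 < |x-a| < \delta(y)\}$ pinches shut at the origin and contains no punctured disc around it.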

Hope this makes sense!


Edit: Thanks for clarifying your assumptions in your comment. Good news: you're right! In fact, given that the inner limits exist, all you need to assume further is that $\lim_{(x,y)\to(a,b)}F(x,y)$ exists. Intuitively, this is because $\lim_{(x,y)\to(a,b)}F(x,y) = L$ means that $F(x,y) \to L$ as $(x,y)$ approaches $(a,b)$ along any path, and you can think of the iterated limits as limits of $F(x,y)$ as $(x,y)$ approaches $(a,b)$ along particular paths. So the assumption $F(x,y) \to L$ as $(x,y) \to (a,b)$ includes the statements $\lim_{x\to a}\lim_{y\to b}F(x,y) = L$ and $\lim_{y\to b}\lim_{x\to a}F(x,y) = L$ as special cases.

A precise proof of this claim is probably easiest to write by starting with the definition of the limit $L = \lim_{(x,y) \to (a,b)}F(x,y)$ and showing (in a similar fashion to what you were doing) that the iterated limits both equal $L$.
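For what it's worth, here is one way that sketch might go (my own wording, assuming additionally that the inner limits $F_a(y) = \lim_{x\to a}F(x,y)$ exist for $y$ near $b$):

```latex
Let $\varepsilon > 0$ and choose $\delta > 0$ such that
$|F(x,y) - L| < \varepsilon/2$ whenever
$0 < \sqrt{(x-a)^2 + (y-b)^2} < \delta$.
Fix any $y$ with $0 < |y - b| < \delta/2$. For every $x$ with
$0 < |x - a| < \delta/2$ the point $(x,y)$ lies in that punctured disc,
so $|F(x,y) - L| < \varepsilon/2$. Letting $x \to a$ yields
$|F_a(y) - L| \le \varepsilon/2 < \varepsilon$ for all such $y$,
i.e.\ $\lim_{y \to b} F_a(y) = L$; the other order is symmetric.
```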


Edit 2: This is in response to your last comment:

Assume for example that $a>0$ and $b>0$ and construct two paths, e.g. from the origin $(0,0)$ to the limit point $(a,b)$. Let $(x_1(t),y_1(t))=(t,0)$ for $0\leq t\leq a$ and $=(a,t-a)$ for $a\leq t\leq a+b$. Define $G_1(t)=F(x_1(t),y_1(t))$ and the "path limit" $\lim_{t\to a+b}G_1(t)$. Let $(x_2(t),y_2(t))=(0,t)$ for $0\leq t\leq b$ and $=(t-b,b)$ for $b\leq t\leq a+b$. Define $G_2(t)=F(x_2(t),y_2(t))$ and the "path limit" $\lim_{t\to a+b}G_2(t)$. Then I agree with you that, in general, $\lim_{t\to a+b}G_1(t)\neq \lim_{t\to a+b}G_2(t)$. But these are not iterated limits.

It doesn't matter that they're not iterated limits. If you reread my last comment, my point is that even if the iterated limits converge to the same value, i.e., $\lim_{x\to a}\lim_{y\to b}F(x,y) = L = \lim_{y\to b}\lim_{x\to a}F(x,y)$, it may very well be the case that the limit $\lim_{(x,y)\to(a,b)}F(x,y)$ doesn't exist, since it might not converge to $L$ along other paths to $(a,b)$ (which by the definition of $\lim_{(x,y)\to(a,b)}F(x,y)$ must all converge to the same value). A standard example of this is $F(x,y) = \frac{xy}{x^2+y^2}$ as $(x,y) \to (0,0)$. It's easy to see that $$ \lim_{x\to 0}\lim_{y\to0}\frac{xy}{x^2+y^2} = \lim_{y\to 0}\lim_{x\to 0}\frac{xy}{x^2+y^2} = 0. $$ In fact $F(x,0) = 0 = F(0,y)$ for all $x,y \neq 0$, and so every neighborhood of $(0,0)$ contains a point $(x,y)$ at which $F(x,y) = 0$.

However, the limit $\lim_{(x,y)\to(0,0)}\frac{xy}{x^2+y^2}$ does not exist. For example, along the line $y=x$, the function equals $$ F(x,x) = \frac{x^2}{2x^2} = \frac{1}{2}. $$ So every neighborhood of $(0,0)$ contains a point $(x,y)\neq (0,0)$ at which $F(x,y) = 1/2$. (For any $\delta > 0$, choose $(x,y) = (x,x)$ where $|x| < \delta/\sqrt2$. Then $\|(x,x)-(0,0)\| = \sqrt{2x^2} < \delta$, and we've shown $F(x,x) = 1/2$.) We've shown that $F(x,y)$ takes on the values $0$ and $1/2$ in every neighborhood of $(0,0)$, and so $\lim_{(x,y)\to(0,0)}F(x,y)$ does not exist.
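The axis/diagonal dichotomy is easy to check numerically; a minimal sketch (my addition):

```python
# Sketch: F(x, y) = x*y / (x^2 + y^2) vanishes along both axes, yet is
# identically 1/2 along the diagonal y = x, for arbitrarily small t.
def F(x, y):
    return x * y / (x**2 + y**2)

for t in [0.1, 0.001, 1e-9]:
    print(f"t = {t:g}: F(t, 0) = {F(t, 0.0)}, F(0, t) = {F(0.0, t)}, "
          f"F(t, t) = {F(t, t)}")
```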

Solution 2:

There is a property called "uniform convergence", which is stronger than "side-way" (pointwise) convergence. Consider the sequence of functions $$f_n(x)=n^2xe^{-nx}, \; x>0,\; n=1,2,\dots$$ It's easy to show that for each $n$, the maximum of $f_n(x)$ over all $x\ge0$ is $\;n/e$ and is achieved at $x_n=1/n$. Also, for any fixed $x\ge0$, $f_n(x) \to 0$ as $n \to \infty$. The notation $\lim_{n \to \infty}f_n(x)=0$ is somewhat misleading in this context: how can we have that and still have a sequence $x_n$ such that $f_n(x_n)=n/e$? The definition goes: $\forall \epsilon \gt 0\; \exists N \gt 0$ such that $f_n(x) \lt \epsilon \; \forall n \gt N$. But this $N$ may be (and in this example it is) an unbounded function of $x$. The definition of uniform convergence at, say, $\bar{x}$ goes: $\forall \epsilon \gt 0\; \exists N \gt 0$ and an $\alpha \gt 0$ such that $f_n(x) \lt \epsilon \; \forall n \gt N$ and $\forall x \in(\bar{x}- \alpha,\bar{x}+\alpha)$. Here we have pointwise convergence everywhere, and uniform convergence everywhere except at $x=0$. To cut the long story short: if you have the existence of a "side-ways" limit over e.g. $x$ together with uniform convergence in $y$, then you have it all. Similarly starting with $y$.
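The sequence above can be tabulated directly; a small sketch (my addition) showing pointwise decay alongside an unbounded supremum:

```python
import math

# Sketch: f_n(x) = n^2 * x * exp(-n x) tends to 0 for every fixed x > 0,
# yet its maximum over x >= 0, attained at x = 1/n, equals n/e -> infinity,
# so the convergence is not uniform near x = 0.
def f(n, x):
    return n**2 * x * math.exp(-n * x)

for n in [1, 10, 100]:
    print(f"n = {n:3d}: max f_n = f_n(1/n) = {f(n, 1.0 / n):8.4f} "
          f"(n/e = {n / math.e:8.4f}), f_n(0.5) = {f(n, 0.5):.3e}")
```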

Solution 3:

Rather a comment than an answer. The only thing missing in Georgy's answer is an integral: $$ \int_0^\infty n^2xe^{-nx} dx = -\frac{1}{n} \int_0^\infty n^2x\, d e^{-nx} = \left[ - \frac{n^2x e^{-nx}}{n} \right]_0^\infty + \int_0^\infty n e^{-nx} dx = 0-\left[e^{-nx}\right]_0^\infty = 1 $$ And a picture, where the grey area is independent of $\,n$ , i.e. the one just calculated:

[Figure: graphs of $f_n(x)=n^2xe^{-nx}$ for increasing $n$; the grey area under each curve is the integral just calculated, independent of $n$.]

It is seen that Georgy's function converges to a Dirac delta at $x=0$.
This is similar to the behaviour of a simpler function, discussed at: Iterated Limits Schizophrenia.
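The unit area can also be confirmed numerically; a quick sketch using trapezoidal quadrature (my addition):

```python
import math

# Sketch: the area under f_n(x) = n^2 * x * exp(-n x) on [0, inf)
# equals 1 for every n; the mass simply concentrates near x = 0,
# which is the Dirac-delta behaviour described above.
def f(n, x):
    return n**2 * x * math.exp(-n * x)

def area(n, steps=200_000):
    upper = 30.0 / n                     # tail beyond 30/n is negligible
    h = upper / steps
    s = 0.5 * (f(n, 0.0) + f(n, upper))  # trapezoidal rule endpoints
    s += sum(f(n, i * h) for i in range(1, steps))
    return s * h

for n in [1, 10, 100]:
    print(f"n = {n:3d}: integral ~ {area(n):.6f}")
```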