How "messy" can a multivariable function be?

In calculus lectures, we are told that approaching a limit of a multivariable function along some particular paths does not guarantee that the limit exists; see, for example: here. In the following example it is easy to find $2$ paths that suggest, in one case, that the limit is $0$ and, in the other, that it is $1$, hence the limit does not exist.

$$f(x,y)=\frac{2xy}{x^2+y^2} \quad f(0,0)=0 $$
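As a quick numeric check (the helper `f` below simply mirrors the formula above, nothing else is assumed), evaluating $f$ along the $x$-axis and along the diagonal $y=x$ exhibits the two competing candidate limits:

```python
def f(x, y):
    """f(x, y) = 2xy / (x^2 + y^2), with f(0, 0) = 0 as in the post."""
    if x == 0 and y == 0:
        return 0.0
    return 2 * x * y / (x**2 + y**2)

# Path 1: along the x-axis (y = 0) the function is identically 0.
along_axis = [f(10**-k, 0.0) for k in range(1, 8)]

# Path 2: along the diagonal y = x the function is identically 1.
along_diag = [f(10**-k, 10**-k) for k in range(1, 8)]

print(along_axis)
print(along_diag)
```

Since the two paths give different constants, no single limit at $(0,0)$ can exist.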

But the objects in question are functions that are continuous everywhere except perhaps at the point $(0,0)$, or some other strategically placed point. It seems intuitively reasonable that one can approach $(0,0)$ through $2$ or some "low" number of different paths and obtain different possible limits, but it does not seem reasonable that there could be a function whose possible limit is different along $278$ paths, for example.

I'm not sure whether these lectures try to take into account the full generality of multivariable functions, but it seems absurd that a $2$-variable function can have such a messy structure that it actually admits an arbitrary number of paths, each suggesting a different possible limit. And I don't know what to say or think when the number of variables is greater than $3$.

EDIT: There are interesting related sub-questions I forgot to add:

  1. Is it possible to test the existence of a limit along all lines and in polar coordinates, obtain a possible limit $L$ from these tests, and yet the function has another path along which the possible limit is different from $L$?

  2. The question above boils down to this: isn't there a minimal number of paths, given by families of curves, such that if the possible limit is $L$ along all of them, this actually guarantees that the limit is $L$? I ask this because it seems extremely counter-intuitive that, given (for example) tests along all lines and all parabolas and in polar coordinates, all with possible limit $L$, there could still be one path that gives a possible limit different from $L$.
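A sketch bearing on sub-question 1 (a standard textbook example, not from this post): $g(x,y)=\dfrac{x^2y}{x^4+y^2}$ has possible limit $0$ along every straight line through the origin, yet is constantly $\tfrac12$ along the parabola $y=x^2$:

```python
def g(x, y):
    """Classic counterexample: limit 0 along every line, 1/2 along y = x^2."""
    if x == 0 and y == 0:
        return 0.0
    return x**2 * y / (x**4 + y**2)

# Along any line y = m*x, g(t, m*t) = m*t / (t^2 + m^2) -> 0 as t -> 0.
line_vals = [g(t, m * t) for m in (0.0, 1.0, -3.0, 278.0) for t in (1e-3, 1e-6)]

# Along the parabola y = x^2, g(t, t^2) = 1/2 for every t != 0.
parab_vals = [g(t, t**2) for t in (1e-1, 1e-3, 1e-6)]

print(line_vals)
print(parab_vals)
```

So agreement along all lines (and in polar coordinates, which parametrizes lines through the origin) does not by itself certify a limit.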


Solution 1:

The granddaddy of ill-behaved functions is $e^{1/x}$. Even in one dimension it is very strange (having what is called an "essential singularity" at the origin), but in 2D or in the complex plane we really see an exotic being take shape:

According to Picard's Great Theorem, $e^{1/z}$ (viewed as a function of a complex number $z$) takes every value except $0$ infinitely many times in any neighborhood of the origin $z=0$. Away from the origin it is perfectly nice and smooth and analytic.

We can make a similarly weird function which is purely real by looking at $$\operatorname{Re}\left\{\exp\left(\frac{1}{x+iy}\right)\right\} = \exp\left(\frac{x}{x^2+y^2}\right)\cos\left(\frac{y}{x^2+y^2}\right)$$ in the $x$-$y$ plane. From Picard's Great Theorem it follows that this function attains every real value infinitely many times in any neighborhood of the origin, but it is smooth everywhere else. Examining this function along paths to the origin gives arbitrarily wild curves.

[Plot of $\operatorname{Re}(\exp(1/(x+iy)))$ near the origin]
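A small numerical sketch (the helper `h` below just evaluates the formula above) already shows three completely different behaviors along three straight paths into the origin:

```python
import math
import cmath

def h(x, y):
    """Re(exp(1/(x+iy))), defined away from the origin."""
    return cmath.exp(1 / complex(x, y)).real

# Along the positive x-axis: h(1/n, 0) = e^n blows up.
pos_axis = [h(1 / n, 0.0) for n in (1, 10, 100)]

# Along the negative x-axis: h(-1/n, 0) = e^(-n) tends to 0.
neg_axis = [h(-1 / n, 0.0) for n in (1, 10, 100)]

# Along the y-axis: h(0, y) = cos(1/y) oscillates between -1 and 1 forever.
y_axis = [h(0.0, 1 / (math.pi * n)) for n in (1, 2, 3)]

print(pos_axis, neg_axis, y_axis)
```

Three paths, three behaviors: blow-up, decay to $0$, and bounded oscillation, all at the same point.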

Solution 2:

Consider your function: as we approach the origin along the line $y=mx$ of slope $m$, we get

$\displaystyle f(x,mx)=\frac{2x(mx)}{x^2+(mx)^2} = \frac{2mx^2}{(m^2+1)x^2} = \frac{2m}{(m^2+1)}$

So the function is constant along that line, and that constant is its limit at the origin along that line. It should be easy to see that as we vary $m$ we get infinitely many limits to choose between.
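A quick numeric spot-check of that computation (nothing assumed beyond the formula for $f$): the value along $y=mx$ matches the constant $2m/(m^2+1)$ no matter how close $x$ is to $0$.

```python
def f(x, y):
    """f(x, y) = 2xy / (x^2 + y^2) away from the origin."""
    return 2 * x * y / (x**2 + y**2)

# Along y = m*x the value is the constant 2m/(m^2 + 1), whatever x is.
errors = []
for m in (0.0, 0.5, 1.0, -2.0, 278.0):
    expected = 2 * m / (m**2 + 1)
    for x in (1.0, 1e-3, 1e-9):
        errors.append(abs(f(x, m * x) - expected))

print(max(errors))
```

Each slope $m$ yields a different constant, so the directional "limits" fill a whole interval of values.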

Can you come up with a continuous function that has only finitely many limits as we approach a singularity?

Solution 3:

Besides all the beautiful examples that have been given already, I wanted to point out that even approaching along a "path" does not necessarily mean that a limit exists at all. This phenomenon is no specialty of higher dimensions; it can happen in just one dimension.

Consider, for instance, the function \begin{align*} f:\mathbb R\to\mathbb R,\ x\mapsto\begin{cases}\frac{1}{x}\sin(\frac{1}{x}), & x\neq 0, \\ 0, & x=0.\end{cases} \end{align*} This function is so mean around $0$ that for every real number $\alpha$ one can find a sequence $(x_n)$ of real numbers converging to $0$ such that $f(x_n)$ converges to $\alpha$, even for $\alpha=\pm\infty$.
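A numeric sketch of two such sequences (the particular sequences are my choice, not from the answer): one drives $f$ to $+\infty$, the other keeps it at $0$, and both converge to the same point $0$.

```python
import math

def f(x):
    """f(x) = sin(1/x)/x for x != 0, and f(0) = 0."""
    return math.sin(1 / x) / x if x != 0 else 0.0

# Sequence 1: x_n = 1/(2*pi*n + pi/2), where sin(1/x_n) = 1,
# so f(x_n) = 2*pi*n + pi/2 -> +infinity.
peaks = [f(1 / (2 * math.pi * n + math.pi / 2)) for n in (1, 10, 100)]

# Sequence 2: x_n = 1/(pi*n), where sin(1/x_n) = 0, so f(x_n) = 0.
flats = [f(1 / (math.pi * n)) for n in (1, 10, 100)]

print(peaks)
print(flats)
```

Intermediate targets $\alpha$ can be hit similarly by adjusting the phase inside the sine.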

In one dimension, however, we have the nice fact that if the right- and left-sided limits at some $x$ exist and coincide with $f(x)$, then $f$ is continuous at $x$. This is no longer true in two dimensions. The first obvious reason is that there is not only "right" and "left", but perhaps also some "top" or "bottom", or any crazy direction you can approach from. But even if the limits approaching some point $(x,y)\in\mathbb R\times\mathbb R$ exist along every line, are all equal, and coincide with $f(x,y)$, $f$ still does not have to be continuous at $(x,y)$. For example, consider \begin{align*} f:\mathbb R\times\mathbb R\to\mathbb R,\ (x,y)\mapsto\begin{cases}1, & (x,y)=t(\cos(t),\sin(t))\text{ for some }t> 0, \\ 0, & \text{elsewhere}.\end{cases} \end{align*} The points $t(\cos(t),\sin(t))$ for $t>0$ describe a spiral emanating from $(0,0)$...

...and $f$ is $1$ for points on that spiral, but $0$ otherwise. If one now approaches $(0,0)$ along a straight line starting at some point in the plane, then this line hits the spiral only finitely many times, and therefore the limit of $f$ is $0$, which coincides with $f(0,0)$. But $f$ is clearly not continuous at $(0,0)$, since, approaching along the spiral itself, the limit is $1$.
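The spiral example can be sketched numerically. The membership test `on_spiral` (my naming, with a floating-point tolerance) uses the observation that $(x,y)=t(\cos t,\sin t)$ forces $t=\|(x,y)\|$, so one only has to check that single candidate $t$:

```python
import math

def on_spiral(x, y, tol=1e-9):
    """(x, y) = t*(cos t, sin t) forces t = ||(x, y)||; check that one t."""
    t = math.hypot(x, y)
    return t > 0 and abs(x - t * math.cos(t)) < tol and abs(y - t * math.sin(t)) < tol

def f(x, y):
    """1 on the spiral, 0 elsewhere (including at the origin)."""
    return 1.0 if on_spiral(x, y) else 0.0

# On the spiral itself f is 1...
spiral_vals = [f(t * math.cos(t), t * math.sin(t)) for t in (0.5, 2.0, 7.0)]

# ...but approaching the origin along the positive x-axis (sampling between
# the crossings at x = 2*pi*k) f is 0.
ray_vals = [f(x, 0.0) for x in (1.0, 0.5, 0.1, 0.01)]

print(spiral_vals, ray_vals)
```

So the function really is $0$ along the sampled straight approach and $1$ along the spiral approach, matching the argument above.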

Solution 4:

You see many fine examples here, but one point has not been addressed: You should not consider a limit $(x,y)\to(0,0)$ in terms of paths, but in terms of neighborhoods. When you feel obliged to test convergence in terms of paths you have to test more paths than there are atoms in the universe (even defining this set of paths is a formidable task) for a single instance of $\lim_{(x,y)\to(0,0)} f(x,y)=L$, whereas in terms of neighborhoods you have to establish a simple $\epsilon/\delta$ relation in terms of inequalities: You have to ascertain that for each given $\epsilon>0$ there is a $\delta>0$ such that $0<\|(x,y)\|<\delta$ implies $|f(x,y)-L|<\epsilon$.
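A hedged numeric sketch of this $\epsilon/\delta$ criterion for an example of my choosing, $g(x,y)=\dfrac{2x^2y}{x^2+y^2}$ with limit $L=0$: since $|g(x,y)|\le 2|y|\le 2\|(x,y)\|$, taking $\delta=\epsilon/2$ works, and random sampling inside the punctured $\delta$-disk is consistent with that:

```python
import math
import random

def g(x, y):
    """g(x, y) = 2*x^2*y / (x^2 + y^2); its limit at (0, 0) is L = 0."""
    return 2 * x**2 * y / (x**2 + y**2)

def eps_delta_holds(eps, trials=10_000, seed=0):
    """Check |g(x, y) - 0| < eps on random points with 0 < ||(x, y)|| < delta."""
    rng = random.Random(seed)
    delta = eps / 2  # justified by the bound |g| <= 2*||(x, y)||
    for _ in range(trials):
        a = rng.uniform(0.0, 2 * math.pi)
        r = rng.uniform(1e-12, delta)
        x, y = r * math.cos(a), r * math.sin(a)
        if abs(g(x, y)) >= eps:
            return False
    return True

print(eps_delta_holds(0.1), eps_delta_holds(1e-6))
```

Of course the sampling proves nothing by itself; the point is that the one-line inequality $|g|\le 2\|(x,y)\|$ settles the limit for all paths at once, which no amount of path-by-path testing can do.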