Differentiability over closed intervals

The problem is one of consistent definitions. Intuitively we can make sense of differentiability on a closed interval, but it requires a slightly more careful phrasing of the definition of "differentiable at a point". I don't know which book you are using, but I am betting that it contains some version of the following (naive) definition:

Definition A function $f$ is differentiable at $x$ if $\lim_{y\to x} \frac{f(y) - f(x)}{y-x}$ exists and is finite.

To make sense of the limit, oftentimes the textbook will explicitly require that $f$ be defined on an open interval containing $x$. And if the definition of differentiability at a point requires $f$ to be defined on an open interval around the point, then differentiability on a set can only be stated for sets in which every point is contained in an open interval lying inside the set. To illustrate, consider a function $f$ defined only on $[0,1]$, and try to determine whether $f$ is differentiable at $0$ by naively applying the above definition. Since $f(y)$ is undefined for $y<0$, the limit

$$ \lim_{y\to 0^-} \frac{f(y) - f(0)}{y} $$

is undefined, and hence the derivative cannot exist at $0$ using one particular reading of the above definition.

For this purpose, some people use the notion of semi-derivatives, or one-sided derivatives, when dealing with boundary points. Others simply adopt the convention that, when speaking of closed intervals, the derivative at a boundary point is defined using the appropriate one-sided limit.
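
To see this concretely, here is a minimal numerical sketch in Python (the choice $f(x) = x^2$ on $[0,1]$ is my own, purely for illustration): the two-sided quotient at $0$ is unavailable, but the right-hand quotient converges.

```python
# A minimal numerical sketch (assuming the hypothetical choice f(x) = x**2
# on [0, 1]): the two-sided difference quotient at 0 is unavailable, but the
# right-hand quotient converges, so the one-sided derivative exists.

def f(x):
    if not 0.0 <= x <= 1.0:
        raise ValueError("f is only defined on [0, 1]")
    return x * x

# Right-hand difference quotients (f(h) - f(0)) / h as h -> 0+.
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, (f(h) - f(0.0)) / h)  # tends to 0, the one-sided derivative at 0

# The left-hand quotient would need f(-h), which is simply not defined:
# f(-0.1)  # -> ValueError
```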


Your textbook is not just being pedantic, however. If one wishes to study multivariable calculus, the definition of differentiability that requires taking limits in all directions is much more robust than one built from one-sided limits. The main problem is that in one dimension, given a boundary point, there is clearly a "left" and a "right", and each occupies "half" the available directions; this is no longer the case for domains in higher dimensions. Consider the domain

$$ \Omega = \{ (x,y) : y \leq \sqrt{|x|} \} \subsetneq \mathbb{R}^2$$


A particular boundary point of $\Omega$ is the origin. However, from the origin, almost all directions point into $\Omega$ (the only one that doesn't is the one that points straight up, in the positive $y$ direction). So the total derivative cannot be defined at the origin if a function $f$ is only defined on $\Omega$. But if you try to loosen the definitions and consider only those directional derivatives that are "defined", they may not patch together nicely at all. (A canonical example is the function $$f(x,y) = \begin{cases} 0 & y \leq 0 \\ \text{sgn}(x) y^{3/2} & y > 0\end{cases}$$ where $\text{sgn}$ returns $+1$ if $x > 0$, $-1$ if $x < 0$, and $0$ if $x = 0$. Its graph looks like what happens when you tear a piece of paper.)
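
Here is a small numerical sketch of that example (my own illustration, not part of the original argument): every directional difference quotient at the origin along a direction into $\Omega$ tends to $0$, yet the two sides of the tear differ by $2y^{3/2}$ however close to the $y$-axis you sample, so $f$ admits no continuous extension to an open neighbourhood of the origin.

```python
import math

# A numerical sketch of the "torn paper" example (my own illustration):
# every directional difference quotient of f at the origin, taken along a
# direction pointing into Omega, tends to 0 -- yet the two sides of the tear
# differ by 2*y**1.5 however close to the y-axis we sample, so f has no
# continuous extension to an open neighbourhood of the origin.

def sgn(x):
    return (x > 0) - (x < 0)

def f(x, y):
    return sgn(x) * y ** 1.5 if y > 0 else 0.0

# Directional quotients f(t*cos a, t*sin a) / t at the origin (f(0,0) = 0),
# for a few directions pointing into Omega (anything but straight up).
for a in [0.0, math.pi / 4, 3 * math.pi / 4, math.pi, -math.pi / 2]:
    qs = [f(t * math.cos(a), t * math.sin(a)) / t for t in (1e-2, 1e-4, 1e-6)]
    print(f"angle {a:+.2f}: quotients {qs}")  # all tend to 0

# The "tear": a fixed jump across the positive y-axis, no matter how close.
y = 0.01
print(f(1e-12, y) - f(-1e-12, y))  # 2 * 0.01**1.5 = 0.002
```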



But note that this is mainly a failure of the original naive definition of differentiability (which, however, may be pedagogically more convenient). A much more general notion of differentiability can be defined:

Definition Let $S\subseteq \mathbb{R}$, and let $f$ be an $\mathbb{R}$-valued function defined on $S$. Let $x\in S$ be a limit point of $S$. Then we say that $f$ is differentiable at $x$ if there exists a linear function $L$ such that for every sequence of points $x_n\in S$ different from $x$ but converging to $x$, we have that $$ \lim_{n\to\infty} \frac{f(x_n) - f(x) - L(x_n-x)}{|x_n - x|} = 0 $$
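
To see the definition at work in the simplest case (a sanity check with a function of my own choosing), take $S = [0,1]$, $f(x) = x^2$, and $x = 0$, with the candidate linear map $L(h) = 0\cdot h$. For any sequence $x_n \in [0,1]$ with $x_n \neq 0$ and $x_n \to 0$,

$$ \frac{f(x_n) - f(0) - L(x_n - 0)}{|x_n - 0|} = \frac{x_n^2}{x_n} = x_n \longrightarrow 0, $$

so $f$ is differentiable at the boundary point $0$ with derivative $0$, and no open neighborhood of $0$ was needed.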

This definition is a mouthful (and rather hard to teach in an introductory calculus course), but it has several advantages:

  1. It readily includes the case of closed intervals.
  2. It doesn't even need intervals. For example, you can let $S$ be the set $\{0\} \cup \{1/n\}$ where $n$ ranges over all positive integers. Then $0$ is a limit point, and so you can consider whether a function defined on this set is differentiable at the origin (see the sketch after this list).
  3. It easily generalises to higher dimensions and vector-valued functions. Just let $f$ take values in $\mathbb{R}^n$, and let the domain be $S\subseteq \mathbb{R}^d$. The rest of the definition remains unchanged.
  4. It captures, geometrically, the essence of differentiation, which is "approximation by tangent planes".
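
As a concrete sketch of item 2 (with a hypothetical function of my own choosing), define $f(x) = x^2$ on $S = \{0\} \cup \{1/n\}$ only; the quotient from the definition, evaluated along points of $S$, visibly tends to $0$:

```python
# A sketch of item 2 (with a hypothetical choice of f): define f(x) = x*x on
# S = {0} U {1/n : n = 1, 2, ...} only. The only sequences in S approaching 0
# are (sub)sequences of 1/n, and along them the quotient in the definition
# tends to 0, so f is differentiable at 0 with L(h) = 0*h.

def f(x):
    # x is assumed to be a point of S; f is undefined elsewhere.
    return x * x

def L(h):
    return 0.0 * h  # candidate linear map

for n in (10, 100, 1000, 10000):
    x_n = 1.0 / n
    q = (f(x_n) - f(0.0) - L(x_n - 0.0)) / abs(x_n - 0.0)
    print(n, q)  # equals x_n, which tends to 0
```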

With this definition in hand, you can easily add

Definition If $S\subseteq \mathbb{R}$ is such that every point $x\in S$ is a limit point of $S$, and $f$ is a real-valued function on $S$, we say that $f$ is differentiable on $S$ if $f$ is differentiable at all points $x\in S$.

Note how this looks very much like the statement you quoted in your question. In the definition of pointwise differentiability we replaced the condition "$x$ is contained in an open neighborhood" by "$x$ is a limit point". And in the definition of differentiability on a set we just replaced the condition "every point has an open neighborhood" by "every point is a limit point". (This is what I meant by consistency: how you define pointwise differentiability necessarily affects how you define set differentiability.)


If you go on to study differential geometry, this issue manifests itself in the definitions of "manifolds", "manifolds with boundary", and "manifolds with corners".


A function is differentiable on a set $S$ if it is differentiable at every point of $S$. This is the definition I have seen in introductory/classic calculus texts, and it mirrors the definition of continuity on a set.

So $S$ could be an open interval, a closed interval, even a finite set; in fact, it could be any set you want.

So yes, we do have a notion of a function being differentiable on a closed interval.

The reason Rolle's theorem asks for differentiability on the open interval $(a,b)$ is that this is a weaker assumption than requiring differentiability on $[a,b]$.

Theorems normally try to make their assumptions as weak as possible, to be more generally applicable.

For instance, the function:

$$f(x) = \begin{cases} x \sin \frac{1}{x} & x > 0 \\ 0 & x = 0\end{cases}$$

is continuous at $0$, and differentiable everywhere except at $0$.

You can still apply Rolle's theorem to this function on, say, the interval $[0,\frac{1}{\pi}]$ (note $f(0) = f(\frac{1}{\pi}) = 0$), since differentiability is only required on the open interval $(0,\frac{1}{\pi})$. If the statement of Rolle's theorem required differentiability on the closed interval, then you could not apply it to this function.
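
As a quick numerical sanity check (my own sketch, not part of the original answer; the bracketing interval $[0.2, 0.3]$ is chosen by inspection), one can locate a point $c \in (0,\frac{1}{\pi})$ with $f'(c) = 0$, just as Rolle's theorem promises:

```python
import math

# A numerical sanity check (my own sketch; the bracket [0.2, 0.3] inside
# (0, 1/pi) was chosen by inspection): Rolle's theorem on [0, 1/pi] promises
# some c in (0, 1/pi) with f'(c) = 0. We locate one by bisecting on a sign
# change of a central-difference approximation to f'.

def f(x):
    return x * math.sin(1.0 / x) if x > 0 else 0.0

def df(x, h=1e-7):
    # central difference approximation to f'(x), valid for x bounded away from 0
    return (f(x + h) - f(x - h)) / (2 * h)

a, b = 0.2, 0.3
assert df(a) * df(b) < 0  # opposite signs, so f' vanishes in between
for _ in range(60):
    c = (a + b) / 2
    if df(a) * df(c) <= 0:
        b = c
    else:
        a = c
print("c ~", c, "f'(c) ~", df(c))  # c ~ 0.2226, where tan(1/c) = 1/c
```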