Remark 4.31 in Baby Rudin: How to verify these points?

$\newcommand{\eps}{\varepsilon}$One nice way to investigate the question rigorously is to consider the "unit step function" $$ H(x) = \begin{cases} 0 & \text{if $x \leq 0$,} \\ 1 & \text{if $x > 0$.} \end{cases} $$ The function $H$ is obviously non-decreasing and continuous everywhere except $0$.

For each positive integer $n$, the function $f_{n}(x) = c_{n} H(x - x_{n})$ is the "step of height $c_{n}$ at $x_{n}$"; again, this function is obviously non-decreasing and has a "jump" of size $c_{n}$ at $x_{n}$.

The interesting observation is that $$ f(x) = \sum_{n=1}^{\infty} f_{n}(x), $$ because the terms with $x_{n} \geq x$ vanish, leaving exactly $\sum_{x_{n} < x} c_{n}$. It follows at once that $f$ is non-decreasing.
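To see the construction concretely, here is a small numerical sketch (my own illustration, not from Rudin), with hypothetical data: four jump points $x_n$ listed in no particular order and heights $c_n = 2^{-n}$.

```python
# Unit step function: 0 for x <= 0, 1 for x > 0 (Rudin's H).
def H(x):
    return 1.0 if x > 0 else 0.0

# Hypothetical data: jump points x_n in (0, 1), in no particular order,
# and summable positive heights c_n = 2**-n.
xs = [0.5, 0.25, 0.75, 0.125]
cs = [2.0 ** -(n + 1) for n in range(len(xs))]

def f(x):
    # f(x) = sum over n of f_n(x) = c_n * H(x - x_n);
    # only the terms with x_n < x survive.
    return sum(c * H(x - xn) for c, xn in zip(cs, xs))

# f is non-decreasing: check on a grid.
grid = [i / 100 for i in range(101)]
vals = [f(t) for t in grid]
assert all(a <= b for a, b in zip(vals, vals[1:]))
```

Summing all $f_n$ and summing only the $c_n$ with $x_n < x$ give the same value, which is the observation above.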

Parts (b) and (c) follow almost immediately from the (easy) fact that the preceding series "converges uniformly" to $f$. However, Rudin doesn't discuss uniform limits until Chapter 7 (if memory serves), so we'll have to establish a tool from the definitions.

Lemma: If $x \not\in E$, i.e., if $x \neq x_{n}$ for all $n$, then $f$ is continuous at $x$.

Proof (sketch): Fix $\eps > 0$ arbitrarily. Use summability of $(c_{n})$ to choose a natural number $N$ such that $$ \sum_{n = N+1}^{\infty} c_{n} < \eps. $$ Now pick $\delta > 0$ so that $(x - \delta, x + \delta)$ contains none of the $x_{n}$ with $n \leq N$; for example, take $$ \delta = \min \{|x_{n} - x| : 1 \leq n \leq N\}, $$ which is positive because $x \neq x_{n}$ for each $n$. If $|x - y| < \delta$, then $$ |f(x) - f(y)| \leq \sum_{n=N+1}^{\infty} c_{n} < \eps. $$ (The first inequality requires justification; the point is, each of $f(x)$ and $f(y)$ is a sum of various $c_{n}$, but if $n \leq N$, then $x_{n}$ does not lie between $x$ and $y$, so "$c_{n}$ does not appear in the difference".)
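The recipe in this sketch can be tested numerically. In the following illustration (my own, with hypothetical data) I take $c_n = 2^{-n}$, $x_n$ the fractional parts of $n$ times the golden ratio (an equidistributed sequence in $(0,1)$), and check continuity at $x = 0.3$, which is not one of the $x_n$.

```python
# Hypothetical data: c_n = 2**-n, x_n = frac(n * phi), phi the golden ratio.
PHI = (5 ** 0.5 - 1) / 2
NMAX = 200                                   # truncate the series for computation
xs = [(n * PHI) % 1.0 for n in range(1, NMAX + 1)]
cs = [2.0 ** -n for n in range(1, NMAX + 1)]

def f(x):
    return sum(c for c, xn in zip(cs, xs) if xn < x)

x, eps = 0.3, 1e-3
N = 10                                       # tail sum_{n>10} 2**-n = 2**-10 < eps
delta = min(abs(xn - x) for xn in xs[:N])    # positive: x is not any x_n
# Every y within delta of x satisfies |f(x) - f(y)| <= tail < eps:
for k in range(-99, 100):
    y = x + delta * k / 100.0                # samples strictly inside the interval
    assert abs(f(x) - f(y)) < eps
```

Only terms with $n > N$ can appear in the difference $f(x) - f(y)$, which is exactly the justification noted parenthetically above.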

This lemma handles part (c). Part (b) is immediate from the following "trick": For each $n$, we can "decompose" $f$ as $$ f(x) = \underbrace{f(x) - f_{n}(x)}_{g_{n}(x)} + f_{n}(x). $$ The difference $g_{n}(x)$ on the right-hand side is precisely the function constructed in the same manner as $f$, except by eliminating the point $x_{n}$ from the set $E$, and removing the corresponding summand from $f(x)$. As such $g_{n}$ is continuous at $x_{n}$ by the lemma (!). Since $f_{n}$ has a jump discontinuity at $x_{n}$, $f$ does, as well.
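A quick numerical check of this decomposition (my own illustration, with hypothetical data): $g_k = f - f_k$ shows no jump at $x_k$, while $f_k$ jumps by exactly $c_k$, so the two-sided limits of $f$ there differ by $c_k$.

```python
# Hypothetical data: four jump points and heights.
xs = [0.5, 0.25, 0.75, 0.125]
cs = [0.4, 0.3, 0.2, 0.1]

def H(x):
    return 1.0 if x > 0 else 0.0

def f(x):
    return sum(c * H(x - xn) for c, xn in zip(cs, xs))

k = 1                                   # examine x_2 = 0.25 with c_2 = 0.3
fk = lambda x: cs[k] * H(x - xs[k])     # the single step f_k
gk = lambda x: f(x) - fk(x)             # g_k = f - f_k, continuous at x_k
h = 1e-9
assert abs(gk(xs[k] + h) - gk(xs[k] - h)) < 1e-9   # g_k has no jump at x_k
jump = f(xs[k] + h) - f(xs[k] - h)
assert abs(jump - cs[k]) < 1e-9                    # f jumps by exactly c_k
```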


Hint: Assume the $x_n$ are listed in increasing order. Then for each $x\in (x_n,x_{n+1}]$, with $n\in \mathbb{N}$, $$f(x)=\sum_{i=1}^n c_i,$$ which shows that $f$ is non-decreasing on $(a,b)$. Also, $$f(x_n+)-f(x_n-)=\sum_{i=1}^n c_i-\sum_{i=1}^{n-1} c_i=c_n.$$ Note, too, that the function is left-continuous.
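The hint can be checked on a toy example (my own, assuming the $x_n$ are listed in increasing order, with hypothetical heights $c_n = 2^{-n}$):

```python
# Hypothetical data: increasing jump points and heights c_n = 2**-n.
xs = [0.1, 0.2, 0.3, 0.4, 0.5]
cs = [0.5, 0.25, 0.125, 0.0625, 0.03125]

def f(x):
    return sum(c for c, xn in zip(cs, xs) if xn < x)

# On (x_n, x_{n+1}] the value is the partial sum c_1 + ... + c_n:
assert f(0.25) == cs[0] + cs[1]              # 0.25 lies in (x_2, x_3]
# Left continuity: f(x_3) uses only x_n < x_3, so f(x_3) = f(x_3-):
assert f(0.3) == cs[0] + cs[1]
# The jump at x_3 has size c_3:
h = 1e-9
assert abs((f(0.3 + h) - f(0.3 - h)) - cs[2]) < 1e-9
```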

It is best to first draw the function in your mind and then write out the $\epsilon-\delta$ proof, which follows easily from the picture.


I think this example begs for the use of the concept of an absolutely summable family, as defined by Dieudonne in Chapter V, Section 3 of Foundations of Modern Analysis, or (at an introductory undergraduate level) in Chapter 5 of Alan F. Beardon, Limits: A New Approach to Real Analysis.

(Of course one can do without this, and perhaps then it is best to ignore Rudin's remark that "the order in which the terms are arranged is immaterial," which may be a bit of a red herring, because instead of using absolutely summable families one can define the sum of any series $\sum c_{n_k}$, where $( n_k : k \in \mathbb{N} )$ is any strictly increasing sequence, and then of course the order of the terms remains the same.)

If $J = \{ 1, 2, 3, \dotsc \}$, then $( c_n : n \in J )$ is an absolutely summable family, therefore so is $( c_n : n \in J_x )$, where $J_x = \{ n \in J : x_n < x \}$, for all $x \in (a, b)$, and: $$ f(x) = \sum_{n \in J_x} c_n \qquad (a < x < b). $$

The ordering of the index set $J$ is not used, and $( x_n : n \in J )$ may be just any countable family in $(a, b)$. The family $( x_n : n \in J )$ is injective, but I don't think we need this. However, for neatness, we can exploit the unused injectivity, as follows:

Take the given countable subset $E \subset (a, b)$ as the index set for the absolutely summable family, which now becomes $( c_x : x \in E )$.

Where possible, I won't use the assumption that $E$ is infinite; that is, $E$ is only assumed to be at most countable.

Define: \begin{gather*} \mu(S) = \sum_{y \in S} c_y \qquad (S \subseteq E), \\ f(x) = \mu(E \cap (a, x)) \qquad (a < x < b). \end{gather*}
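As a concrete (finite, hypothetical) sketch of these definitions in code, one can index the weights by the points of $E$ themselves:

```python
# Hypothetical finite family: c_y for y in E, with (a, b) = (0, 1).
c = {0.5: 0.4, 0.25: 0.3, 0.75: 0.2, 0.125: 0.1}
E = set(c)

def mu(S):
    # mu(S) = sum of c_y over y in S, for S a subset of E.
    return sum(c[y] for y in S)

def f(x):
    # f(x) = mu(E ∩ (a, x)).
    return mu({y for y in E if y < x})

assert abs(mu(E) - 1.0) < 1e-12            # total mass of the family
assert abs(f(0.3) - 0.4) < 1e-12           # only 0.25 and 0.125 lie below 0.3
```

Because no ordering of the index set is used, the same code works for any enumeration of $E$.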

Property (a) is trivial.

To prove (b) and (c) together, we need to prove: (i) $f(x-) = f(x)$; (ii) $f(x+) = \mu(E \cap (a, x])$.

Proof of (i).

For all $\epsilon > 0$, there exists finite $F \subset E \cap (a, x)$ such that $\mu(F) > f(x) - \epsilon.$ If $\max(F) < t < x$, then $f(t) > f(x) - \epsilon$. Since we already know that $f(x-) \leqslant f(x)$, this proves that $f(x-) = f(x)$.

Proof of (ii).

Define $g(x) = \mu(E \cap (a, x])$ and $h(x) = \mu(E \cap (x, b))$. Then $g(x) + h(x) = \mu(E)$, which is a constant independent of $x$. By the same argument as in (i) (or else by a change of variable from $x$ to $a + b - x$), we have $h(x+) = h(x)$, therefore $g(x+) = g(x)$. But it is clear that $f(x+) = g(x+)$, because if $x < t < u < b,$ then $f(t) \leqslant g(u)$ and $g(t) \leqslant f(u)$. Hence $f(x+) = g(x)$. Q.E.D.
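The relations used in this proof can be sanity-checked numerically on the same kind of hypothetical finite family (my own illustration):

```python
# Hypothetical finite family in (a, b) = (0, 1).
c = {0.5: 0.4, 0.25: 0.3, 0.75: 0.2, 0.125: 0.1}
E = set(c)
total = sum(c.values())                     # mu(E)

def mu(S):
    return sum(c[y] for y in S)

def f(x):
    return mu({y for y in E if y < x})      # mu(E ∩ (a, x))

def g(x):
    return mu({y for y in E if y <= x})     # mu(E ∩ (a, x])

def h(x):
    return mu({y for y in E if y > x})      # mu(E ∩ (x, b))

x, eps = 0.25, 1e-9
assert abs(g(x) + h(x) - total) < 1e-12     # g + h is the constant mu(E)
assert abs(f(x + eps) - g(x)) < 1e-12       # f(x+) = g(x)
assert abs(f(x) + c[x] - g(x)) < 1e-12      # g(x) - f(x) = c_x for x in E
```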


Given $\varepsilon>0$, choose $M\in\mathbb N$ so large that $\sum_{m=M+1}^\infty c_m<\varepsilon$.

Then choose $\delta>0$ so small that all points in $\{x_1,\ldots,x_M\}$, with the possible exception of $x$ itself, are at a distance $>\delta$ from $x$.

Probably you can take it from there.

PS: So if $|y-x|<\delta$, then $|f(y)-f(x)|$ is a sum of members $c_n$ with $n > M$, whose total is less than $\varepsilon$, unless $x$ itself is in the sequence. That proves continuity at numbers $x$ that are not in the sequence.

Now suppose $x$ is in the sequence. Then $x= x_k$ for some $k$. In this case, we need to prove that $f(x-)=f(x)$ is the sum of all members of $\{c_n\}_{n=1}^\infty$ for which $x_n<x$, and that $f(x+)$ is that sum plus $c_k$. So suppose $0<x-y<\delta$. Then $f(y)$ differs from the sum of all members of $\{c_n\}_{n=1}^\infty$ for which $x_n<x$ by a sum of members of $\{c_n\}_{n=1}^\infty$ that is less than $\varepsilon$; hence $f(x-)=f(x)$. If instead $0<y-x<\delta$, then $f(y)$ differs from $f(x)+c_k$ by such a sum, because the only $x_n$ with $n\leq M$ lying in $[x,y)$ is $x_k$ itself; hence $f(x+)=f(x)+c_k$.
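With Rudin's definition $f(x)=\sum_{x_n<x}c_n$, $f$ is left-continuous at $x_k$ and jumps by $c_k$ from the right. Both one-sided limits can be checked numerically (my own illustration, with the hypothetical data $c_n = 2^{-n}$ and $x_n$ the fractional parts of $n$ times the golden ratio):

```python
# Hypothetical data: c_n = 2**-n and x_n = frac(n * phi), phi the golden ratio.
PHI = (5 ** 0.5 - 1) / 2
NMAX = 200                       # truncate the series for computation
xs = [(n * PHI) % 1.0 for n in range(1, NMAX + 1)]
cs = [2.0 ** -n for n in range(1, NMAX + 1)]

def f(x):
    return sum(c for c, xn in zip(cs, xs) if xn < x)

k = 3                            # examine the point x = x_4 of the sequence
x, ck = xs[k], cs[k]
h = 1e-12                        # smaller than the gap to every other x_n here
assert abs(f(x - h) - f(x)) < 1e-9          # f(x-) = f(x): left-continuous
assert abs(f(x + h) - (f(x) + ck)) < 1e-9   # f(x+) = f(x) + c_k
```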