Theorem 6.16 in Baby Rudin: $\int_a^b f d \alpha = \sum_{n=1}^\infty c_n f\left(s_n\right)$

Here is Theorem 6.16 in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition:

Suppose $c_n \geq 0$ for $n = 1, 2, 3, \ldots$, $\sum c_n$ converges, $\left\{ s_n \right\}$ is a sequence of distinct points in $(a, b)$, and $$\tag{22} \alpha(x) = \sum_{n=1}^\infty c_n I \left(x-s_n \right). $$ Let $f$ be continuous on $[a, b]$. Then $$\tag{23} \int_a^b f d \alpha = \sum_{n=1}^\infty c_n f \left( s_n \right). $$

(Here $I$ is the unit step function of Definition 6.14: $I(x) = 0$ if $x \leq 0$, and $I(x) = 1$ if $x > 0$.)

And, here is Rudin's proof:

The comparison test shows that the series (22) converges for every $x$. Its sum $\alpha(x)$ is evidently monotonic, and $\alpha(a) = 0$, $\alpha(b) = \sum c_n$.

Let $\varepsilon > 0$ be given, and choose $N$ so that $$ \sum_{N+1}^\infty c_n < \varepsilon. $$ Put $$ \alpha_1(x) = \sum_{n=1}^N c_n I \left( x-s_n \right), \qquad \alpha_2(x) = \sum_{N+1}^\infty c_n I \left( x - s_n \right). $$ By Theorems 6.12 and 6.15, $$\tag{24} \int_a^b f d \alpha_1 = \sum_{n=1}^N c_n f \left( s_n \right). $$ Since $\alpha_2(b) - \alpha_2(a) < \varepsilon$, $$ \tag{25} \left\lvert \int_a^b f d \alpha_2 \right\rvert \leq M \varepsilon, $$ where $M = \sup \lvert f(x) \rvert$. Since $\alpha = \alpha_1 + \alpha_2$, it follows from (24) and (25) that $$\tag{26} \left\lvert \int_a^b f d\alpha - \sum_{n=1}^N c_n f \left( s_n \right) \right\rvert \leq M \varepsilon.$$ If we let $N \to \infty$, we obtain (23).

Here are the links to my earlier posts here on Math SE on Theorems 6.12 and 6.15:

Theorem 6.12:

Theorem 6.12 (a) in Baby Rudin: $\int_a^b \left( f_1 + f_2 \right) d \alpha=\int_a^b f_1 d \alpha + \int_a^b f_2 d \alpha$

Theorem 6.12 (a) in Baby Rudin: If $f\in\mathscr{R}(\alpha)$ on $[a,b]$, then $cf\in\mathscr{R}(\alpha)$ for every constant $c$

Theorem 6.12 (b) in Baby Rudin: If $f_1 \leq f_2$ on $[a, b]$, then $\int_a^b f_1 d\alpha \leq \int_a^b f_2 d\alpha$

Theorem 6.12 (c) in Baby Rudin: If $f\in\mathscr{R}(\alpha)$ on $[a, b]$ and $a<c<b$, then $f\in\mathscr{R}(\alpha)$ on $[a, c]$ and $[c, b]$

Theorem 6.12 (d) in Baby Rudin: If $\lvert f(x) \rvert \leq M$ on $[a, b]$, then $\lvert \int_a^b f d\alpha \rvert \leq \ldots$

Theorem 6.12 (e) in Baby Rudin: If $f \in \mathscr{R}\left(\alpha_1\right)$ and $f \in \mathscr{R}\left(\alpha_2\right)$, then $\ldots$

Theorem 6.12 (e) in Baby Rudin: If $f \in \mathscr{R}(\alpha)$ and $c > 0$, then $\ldots$

Theorem 6.15:

Theorem 6.15 in Baby Rudin: If $a<s<b$, $f$ is bounded on $[a,b]$, $f$ is continuous at $s$, and $\alpha(x)=I(x-s)$, then . . .

Finally, here is Theorem 6.8 in Baby Rudin, 3rd edition:

If $f$ is continuous on $[a, b]$, then $f \in \mathscr{R}(\alpha)$ on $[a, b]$.

Now here is my account of Rudin's proof:

Since $c_n \geq 0$ and $0 \leq I \left( x-s_n \right) \leq 1$ for each $n = 1, 2, 3, \ldots$, we have $0 \leq c_n I \left( x-s_n \right) \leq c_n$; as $\sum c_n$ converges, the comparison test shows that the series $\sum c_n I \left( x- s_n \right)$ converges for every $x \in [a, b]$.

Since $a < s_n < b$, we have $a - s_n < 0 < b - s_n$, so $I \left( a - s_n \right) = 0$ and $I \left( b-s_n \right) = 1$ for every $n$; therefore $\alpha(a) = \sum_{n=1}^\infty c_n I \left( a - s_n \right) = 0$ and $\alpha(b) = \sum_{n=1}^\infty c_n I \left( b-s_n \right) = \sum_{n=1}^\infty c_n$.

Moreover, if $a \leq x < y \leq b$, then, for every $n \in \mathbb{N}$, $s_n < x$ implies $s_n < y$; so whenever $I \left( x-s_n \right) = 1$, we also have $I \left( y-s_n \right) = 1$. That is, $$ I \left( x - s_n \right) \leq I \left( y - s_n \right) \tag{0} $$ for every natural number $n$.

Since $c_k \geq 0$ for every $k \in \mathbb{N}$, it follows from (0) that $$ \sum_{k=1}^n c_k I \left( x-s_k \right) \leq \sum_{k=1}^n c_k I \left( y-s_k \right) $$ for every $n \in \mathbb{N}$, which implies that
$$ \begin{align} \alpha(x) &= \sum_{n=1}^\infty c_n I \left( x-s_n \right) \\ &= \lim_{n \to \infty} \sum_{k=1}^n c_k I \left( x-s_k \right) \\ &\leq \lim_{n \to \infty} \sum_{k=1}^n c_k I \left( y-s_k \right) \\ &= \sum_{n=1}^\infty c_n I \left( y - s_n \right) \\ &= \alpha(y). \end{align} $$ Thus we have shown that $\alpha$ is a monotonically increasing function defined on $[a, b]$, with $\alpha(a) = 0$ and $\alpha(b) = \sum_{n=1}^\infty c_n$.
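(As a quick numerical illustration of these three facts, here is a minimal Python sketch. The specific choices $c_n = 2^{-n}$ and $s_n = n/(n+1)$ on $[a, b] = [0, 1]$, with the series truncated after $25$ terms, are my own and not from the theorem.)

```python
import numpy as np

# Assumed example (not from the theorem): c_n = 2**-n, s_n = n/(n+1)
# on [a, b] = [0, 1], with the series truncated after 25 terms.
n = np.arange(1, 26)
c, s = 2.0**-n, n / (n + 1.0)

def alpha(x):
    # alpha(x) = sum_n c_n * I(x - s_n), where I(t) = 1 if t > 0 and 0 if t <= 0
    return float(np.sum(c * (x > s)))

xs = np.linspace(0.0, 1.0, 1001)
vals = [alpha(x) for x in xs]
assert all(u <= v for u, v in zip(vals, vals[1:]))  # alpha is increasing
print(alpha(0.0), alpha(1.0), c.sum())              # alpha(a) = 0, alpha(b) = sum c_n
```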

Since $f$ is continuous on $[a, b]$ and $\alpha$ is monotonically increasing on $[a, b]$, Theorem 6.8 shows that $f \in \mathscr{R}(\alpha)$ on $[a, b]$; that is, $\int_a^b f d \alpha$ exists (in the set $\mathbb{R}$ of real numbers).

Now we determine the value of $\int_a^b f d \alpha$ as follows:

As $f$ is continuous on the compact set $[a, b]$, $f$ is bounded on $[a, b]$, and so there exists a positive real number $M$ such that $\lvert f(x) \rvert \leq M$ for all $x \in [a, b]$.

Let $\varepsilon > 0$ be given, and put $$c := \sum_{n=1}^\infty c_n = \lim_{n \to \infty} \sum_{k=1}^n c_k. $$ Then there is a natural number $N$ such that $$\tag{1} \left\lvert \sum_{k=1}^n c_k - c \right\rvert < { \varepsilon \over M } $$ for every natural number $n \geq N$.

But $c_k \geq 0$ for all $k \in \mathbb{N}$, so the sequence $\left\{ \sum_{k=1}^n c_k \right\}_{n \in \mathbb{N}}$ of the partial sums of the series $\sum c_n$ is monotonically increasing and therefore $$ c = \sup \left\{ \ \sum_{k=1}^n c_k \ \colon \ n \in \mathbb{N} \ \right\},$$ which implies that $ \sum_{k=1}^n c_k \leq c$ for every natural number $n$. So (1) takes the form $$ c - \sum_{k=1}^n c_k < {\varepsilon \over M }$$ for every natural number $n \geq N$, which we can write as $$ \sum_{k = n+1 }^\infty c_k < { \varepsilon \over M } \tag{2} $$ for every natural number $n \geq N$.
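(To make the choice of $N$ in (2) concrete: with the illustrative $c_k = 2^{-k}$ from above, the tail is $\sum_{k=n+1}^\infty 2^{-k} = 2^{-n}$ exactly, so any $N$ with $2^{-N} < \varepsilon / M$ works. A tiny check, where the numbers $M = 3$ and $\varepsilon = 10^{-4}$ are arbitrary:)

```python
import math

# With the assumed c_k = 2**-k, the tail past n is exactly 2**-n, so we need
# 2**-N < eps / M; the values of M and eps below are arbitrary.
M, eps = 3.0, 1e-4
N = math.ceil(math.log2(M / eps)) + 1
assert 2.0**-N < eps / M    # this N realizes (2) for the example
```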

Let us fix a natural number $n \geq N$.

Now we put $$ \alpha_1(x) = \sum_{k=1}^n c_k I \left( x - s_k \right), \qquad \alpha_2(x) = \sum_{k= n+1}^\infty c_k I \left( x - s_k \right) $$ for all $x \in [a, b]$. Then $$ \alpha = \alpha_1 + \alpha_2, \tag{3} $$ and from (0) we can conclude that both $\alpha_1$ and $\alpha_2$ are monotonically increasing; furthermore, since $a < s_k < b$ for every natural number $k$, we have
$I \left( a-s_k \right) = 0$ and $I \left( b - s_k \right) = 1$, and therefore $$ \begin{align} \alpha_2 (b) - \alpha_2 (a) &= \sum_{k = n+1}^\infty c_k I \left( b - s_k \right) - \sum_{k = n+1 }^\infty c_k I \left( a - s_k \right) \\ &= \sum_{k= n+1 }^\infty c_k I \left( b - s_k \right) - 0 \\ &= \sum_{k= n+1}^\infty c_k \\ &< { \varepsilon \over M }. \qquad \mbox{ [ using (2) ] } \tag{4} \end{align} $$

Now $$ \begin{align} \int_a^b f(x) d\alpha_1(x) &= \int_a^b f(x) d\left( \sum_{k=1}^n c_k I \left( x-s_k \right) \right) \\ &= \sum_{k=1}^n \int_a^b f(x) d \left( c_k I \left( x- s_k \right) \right) \qquad \mbox{ [ using Theorem 6.12 (e) ] } \\ &= \sum_{k=1}^n c_k \int_a^b f(x) d \left( I \left( x-s_k \right) \right) \qquad \mbox{ [ using Theorem 6.12 (e) again; if $c_k = 0$, the term is trivially $0$ ] } \\ &= \sum_{k=1}^n c_k f \left( s_k \right). \qquad \mbox{ [ using Theorem 6.15 ] } \end{align} $$ Thus $$ \int_a^b f d \alpha_1 = \sum_{k=1}^n c_k f \left( s_k \right). \tag{5} $$

As $M > 0$ and $\lvert f(x) \rvert \leq M$ for all $x \in [a, b]$, Theorem 6.12 (d) in Baby Rudin together with (4) above gives $$ \left\lvert \int_a^b f d \alpha_2 \right\rvert \leq M \left[ \alpha_2 (b) - \alpha_2(a) \right] < M \cdot { \varepsilon \over M } = \varepsilon. \tag{6} $$

Now using (3) and Theorem 6.12 (a) in Baby Rudin, we have $$ \begin{align} \int_a^b f d \alpha &= \int_a^b f d \left( \alpha_1 + \alpha_2 \right) \\ &= \int_a^b f d \alpha_1 + \int_a^b f d \alpha_2 \\ &= \sum_{k=1}^n c_k f \left( s_k \right) + \int_a^b f d \alpha_2. \qquad \mbox{ [ using (5) above ] } \end{align} $$ Therefore $$ \int_a^b f d \alpha - \sum_{k=1}^n c_k f \left( s_k \right) = \int_a^b f d \alpha_2,$$ and hence from (6) we conclude that $$ \left\lvert \sum_{k = 1 }^n c_k f \left( s_k \right) \ - \ \int_a^b f d \alpha \right\rvert = \left\lvert \int_a^b f d \alpha - \sum_{k = 1 }^n c_k f \left( s_k \right) \right\rvert = \left\lvert \int_a^b f d \alpha_2 \right\rvert < \varepsilon. \tag{7}$$

Thus, we have shown that, corresponding to every real number $\varepsilon > 0$, we can find a natural number $N$ such that (7) holds for every natural number $n \geq N$.

So (7) says precisely that the sequence $\left\{ \sum_{k=1}^n c_k f \left( s_k \right) \right\}_{n \in \mathbb{N} }$ of the partial sums of the series $\sum c_n f \left( s_n \right)$ converges to $\int_a^b f d \alpha$. Hence the series $\sum c_n f \left( s_n \right)$ converges, with the sum $$ \sum_{n=1}^\infty c_n f \left( s_n \right) = \int_a^b f d \alpha,$$ that is, $$ \int_a^b f d \alpha = \sum_{n=1}^\infty c_n f \left( s_n \right),$$ as required.
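As a purely numerical sanity check of (23) (no substitute for the proof, of course), one can approximate $\int_a^b f \, d\alpha$ by a Riemann-Stieltjes sum over a fine partition and compare it with the partial sum $\sum c_n f \left( s_n \right)$. The sketch below reuses the assumed example $c_n = 2^{-n}$, $s_n = n/(n+1)$ on $[0, 1]$ from above, now with the arbitrary choice $f(x) = x^2$:

```python
import numpy as np

# Assumed example (mine, not Rudin's): f(x) = x**2, c_n = 2**-n, s_n = n/(n+1)
# on [a, b] = [0, 1], with the series truncated after 25 terms.
n = np.arange(1, 26)
c, s = 2.0**-n, n / (n + 1.0)
f = lambda x: x**2

# Riemann-Stieltjes sum: sum_i f(x_{i+1}) * (alpha(x_{i+1}) - alpha(x_i))
# over a uniform partition of [0, 1] into 10**5 subintervals.
grid = np.linspace(0.0, 1.0, 100001)
A = (grid[:, None] > s[None, :]).astype(float) @ c   # alpha evaluated on the grid
rs_sum = float(np.sum(f(grid[1:]) * np.diff(A)))

print(rs_sum, float(np.sum(c * f(s))))   # the two agree to about 4 decimal places
```

Each jump of $\alpha$ of size $c_n$ falls in a subinterval whose right endpoint is within $10^{-5}$ of $s_n$, so the agreement is only up to the mesh of the partition.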

Is my understanding of Rudin's proof correct and clear enough? If not, then where have I still left the ambiguities?

Is it essential that $\left\{ s_n \right\}$ be a sequence of distinct points for the conclusion of this theorem to hold? As far as I can see, this assumption has not been used anywhere in the proof. Has it?


Solution 1:

You do a good job of filling in the details Rudin omits. Sometimes you perhaps go a little too far with this; for example, your explanation of why $(2)$ follows from $(1)$ is not really necessary, and no one would begrudge you for skipping $(1)$ and just writing $(2)$. However, everything you write is completely correct and demonstrates a good understanding of the proof.

As for whether $\{s_n\}$ needs to be distinct: the answer is no. However, you lose no generality in assuming so. To see this, first define $n_1=1$ and $n_k=\inf\{n>n_{k-1}:s_n\neq s_j\text{ for all }j<n\}$. Then, for each $k$ such that $n_k<\infty$, define $\overline s_k=s_{n_k}$. By construction, $\{\overline s_k\}$ is a (possibly finite) sequence of distinct points in $(a,b)$. Define $\overline c_k:=\sum_{n:s_n=\overline s_k}c_n$. Then $\overline c_k\ge0$ and $\sum_k\overline c_k=\sum_n c_n$. Now let $\overline\alpha(x)=\sum_k\overline c_kI(x-\overline s_k)$. Apply the result to this function (or use Theorems 6.12 and 6.15 if the sequence $\{\overline s_k\}$ is finite) to deduce $$\int_a^bf\,d\overline\alpha=\sum_k\overline c_kf(\overline s_k).$$

Now you may simply note that $\overline\alpha=\alpha$ and $\sum_k\overline c_kf(\overline s_k)=\sum_nc_nf(s_n)$ to conclude.
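(Here is a minimal sketch of this merging step, on toy data I made up with a deliberate repeat $s_2 = s_4$; it checks that collapsing duplicates changes neither the total mass nor the value of the series.)

```python
# Assumed toy data with a repeated point: s_2 = s_4 = 0.5 (numbers are mine).
c = [0.5, 0.25, 0.125, 0.0625]
s = [0.3, 0.5, 0.7, 0.5]
f = lambda x: x**2

# Merge repeats: the keys, in insertion order, are the distinct points s-bar_k,
# and each value accumulates c-bar_k = sum of c_n over all n with s_n = s-bar_k.
merged = {}
for cn, sn in zip(c, s):
    merged[sn] = merged.get(sn, 0.0) + cn

lhs = sum(ck * f(sk) for sk, ck in merged.items())   # sum_k c-bar_k f(s-bar_k)
rhs = sum(cn * f(sn) for cn, sn in zip(c, s))        # sum_n c_n f(s_n)
assert abs(lhs - rhs) < 1e-12                        # the two series agree
assert abs(sum(merged.values()) - sum(c)) < 1e-12    # total mass is unchanged
```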