Oksendal SDEs: What does he want from Exercise 2.1?

I'm stuck on the very first question of the book! Here's a link to the book.

Here is the question:

2.1 Let $X:\Omega \to \mathbb{R}$ be a random variable assuming countably many real values $\{a_1, a_2, \dots\}$.

a) Show that $X$ is $\mathcal{F}$-measurable if and only if $X^{-1}(\{a_k\}) \in \mathcal{F}$ for all $k$.

b) Show $\mathbb{E}[ |X| ] = \sum_{k =1}^\infty |a_k| \mathbb{P}( X = a_k ) $ (1)

c) Assuming (1), show that $\mathbb{E}[ X ] = \sum_{k =1}^\infty a_k \mathbb{P}( X = a_k ) $ (2)

d) For $f$ measurable and bounded, show that $\mathbb{E}[ f(X) ] = \sum_{k =1}^\infty f(a_k) \mathbb{P}( X = a_k ) $ (3)


Now a) is straightforward and follows from the discussion on page 8. My problem is that I know how to do 2b-2d, but I have no idea how to do them using only what the book has told me so far. The only mention of expectation we've had is "for $f$ measurable, $\mathbb{E}[ f(X)] := \int_\Omega f(X(\omega))\, d\mathbb{P}( \omega) $".

I mean sure, I could define $Z_n(\omega) = \sum_{k=1}^{n} |a_k| \mathbb{I}\{ \omega \in X^{-1}(\{a_k\}) \}$ and use the monotone convergence theorem (as sketched below). By splitting $X$ into positive and negative parts and doing something similar we get c), and d) follows from the same trick using the bounded convergence theorem instead.
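To spell out what I mean (my own attempt, not necessarily the argument the book intends): since the sets $X^{-1}(\{a_k\})$ are disjoint, $Z_n(\omega) \uparrow |X|(\omega)$ pointwise, so monotone convergence would give
$$\mathbb{E}[|X|] = \lim_{n\to\infty} \mathbb{E}[Z_n] = \lim_{n\to\infty} \sum_{k=1}^{n} |a_k|\, \mathbb{P}(X = a_k) = \sum_{k=1}^{\infty} |a_k|\, \mathbb{P}(X = a_k),$$
with $\mathbb{E}[Z_n]$ computed by linearity, since $Z_n$ is a simple function.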

The thing is, there's no mention of either of these theorems anywhere in the book (appendix included), so I feel like I'm missing the point. I don't understand the point of introducing the definition of a sigma-algebra if we are then going to proceed silently assuming knowledge of the convergence theorems.

Again, I have exactly the same problem with 2.2 below:

2.2 $X: \Omega \to \mathbb{R}$. $F(x) = \mathbb{P}(X \leq x)$.

a) Prove that $0 \leq F \leq 1$, $\lim_{x\to\infty} F(x) = 1$, and $\lim_{x\to-\infty} F(x) = 0$. Prove that $F$ is increasing and right continuous.

b) Let $g:\mathbb{R} \to \mathbb{R}$. Prove $\mathbb{E}[ g(X)] = \int_{ - \infty}^{\infty} g(x) dF(x)$


Recall that $$\{X=a_k\} = \{\omega\in\Omega:X(\omega)=a_k\} = X^{-1}(\{a_k\}), $$ and so $$\mathbb P(X=a_k) = \mathbb P\circ X^{-1}(\{a_k\}). $$ It follows that $\mathbb P\circ X^{-1}$ defines a probability measure on $\mathbb R$ (the distribution of $X$), with $$\mathbb P\circ X^{-1}(B) = \sum_{a_k\in B}\mathbb P\circ X^{-1}(\{a_k\}) $$ (since $i\ne j$ implies $\{a_i\}\cap\{a_j\}=\varnothing$ and thus $X^{-1}(\{a_i\})\cap X^{-1}(\{a_j\})=\varnothing$, and by assumption $\sum_{k=1}^\infty \mathbb P(X=a_k)=1$).

Moreover, since $X$ takes only the values $a_1,a_2,\dots$, the disjoint sets $X^{-1}(\{a_k\})$ cover $\Omega$, and $|X| = |a_k|$ on $X^{-1}(\{a_k\})$. Hence we compute
\begin{align}
\mathbb E[|X|] &= \int_\Omega |X|\, \mathsf d\mathbb P\\
&= \int_{\bigcup_{k=1}^\infty X^{-1}(\{a_k\}) } |X|\, \mathsf d \mathbb P\\
&= \int_{\Omega}\sum_{k=1}^\infty |X|\,\mathsf 1_{X^{-1}(\{a_k\})}\,\mathsf d\mathbb P\\
&= \sum_{k=1}^\infty \int_{X^{-1}(\{a_k\})} |X|\,\mathsf d\mathbb P\\
&= \sum_{k=1}^\infty |a_k|\,\mathbb P(X=a_k).
\end{align}
(The interchange of sum and integral is valid because all quantities are nonnegative.)

If $\mathbb E[|X|]<\infty$, then by (1) the series $\sum_{k=1}^\infty a_k\,\mathbb P(X=a_k)$ converges absolutely. Splitting $X = X^+ - X^-$ into its positive and negative parts and applying the computation above to each part then gives (2); see the display below.
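One way to write this out (a sketch; grouping the indices by the sign of $a_k$ is only bookkeeping):
\begin{align}
\mathbb E[X] &= \mathbb E[X^+] - \mathbb E[X^-]\\
&= \sum_{k:\,a_k\geq 0} a_k\,\mathbb P(X=a_k) - \sum_{k:\,a_k< 0} (-a_k)\,\mathbb P(X=a_k)\\
&= \sum_{k=1}^\infty a_k\,\mathbb P(X=a_k),
\end{align}
where both sums in the middle line are finite because they are bounded by $\mathbb E[|X|]$, and the final rearrangement is justified by absolute convergence.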

If $f$ is measurable and bounded, then $f(X)$ is a bounded random variable taking only the (countably many) values $f(a_1), f(a_2), \dots$, and the same direct computation as above verifies formula (3) for $\mathbb E[f(X)]$ (commonly known as the "Law of the Unconscious Statistician"); a sketch follows.
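The sketch is the same partition argument as before, now with $f(X)$ constant equal to $f(a_k)$ on $X^{-1}(\{a_k\})$; since $f$ is bounded, the series converges absolutely (it is dominated by $\sup|f|\cdot\sum_k \mathbb P(X=a_k) = \sup|f|$), which justifies interchanging sum and integral (e.g. by splitting $f$ into positive and negative parts):
\begin{align}
\mathbb E[f(X)] &= \int_\Omega f(X)\,\mathsf d\mathbb P\\
&= \sum_{k=1}^\infty \int_{X^{-1}(\{a_k\})} f(X)\,\mathsf d\mathbb P\\
&= \sum_{k=1}^\infty f(a_k)\,\mathbb P(X=a_k).
\end{align}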