Understanding a proof of Komlós's theorem
I'm reading a book about probability theory and they use a certain theorem, called Komlós's theorem, which states:
Let $ (\xi_n) $ be a sequence of random variables on $ (\Omega,\mathcal{F},P) $ with $\sup_n E|\xi_n| < \infty $. Then there is a random variable $ \zeta \in L^1$ and a subsequence $ (\zeta_k) = (\xi_{n_k}) $ such that $$ \frac{\zeta_1+\cdots+\zeta_k}{k} \to \zeta \text{ a.s. }\tag{1}$$ Moreover, the subsequence $ (\zeta_k) $ can be chosen in such a way that every further subsequence of it also satisfies (1).
I found a proof of this theorem in the book
"Two-Scale Stochastic Systems" by Yu. Kabanov and S. Pergamenshchikov.
The proof is in the Appendix, on page 250. Unfortunately, the book is not available online, but I hope someone who owns it can help me.
The point where I got stuck is on page 253.
It's clear that we are able to choose this increasing sequence $ n_k $ such that for all $ n \ge n_k $
$$ E\eta^2_k \le E(\xi^{(k)}_n)^2 +2^{-k} \text{ and }|E(\xi^{(k)}_n-\eta_k | \gamma_{j_1},\dots,\gamma_{j_m})| \le 2^{-k}$$
for all $ m\le k-1, j_1<j_2<\dots<j_m $, with $ \gamma_j:= D_j(\xi^{(j)}_{n_j}-\eta_j)$.
Just for completeness, we set $ \zeta_k:= \xi_{n_k} $.
Now I am confused about the following three things:
- Why is $ |E(\gamma_k \mid \gamma_1,\dots,\gamma_{k-1})|\le 2^{-k+1} $? The above inequality holds for $ \xi^{(k)}_n-\eta_k $, not for $ \gamma_k $.
- It is not clear how the following estimate follows from the first two inequalities:
$$ \sum_{k=1}^\infty\frac{1}{k^2}E\gamma_k^2 \le 2\sum_{k=1}^\infty\frac{1}{k^2}E(\zeta_k^{(k)}-\eta_k)^2+ O(1) \le 4 \sum_{k=1}^\infty\frac{1}{k^2}E(\zeta_k^{(k)})^2 +O(1) < \infty.$$
- For the last inequality, I just calculate
$$ E(\zeta_k^{(k)}-\eta_k)^2 = E(\zeta_k^{(k)})^2 -2 E\,\zeta_k^{(k)}\eta_k + E\eta_k^2 \le 2 E(\zeta_k^{(k)})^2 -2 E\,\zeta_k^{(k)}\eta_k + 2^{-k}. $$
The term $ 2^{-k} $ can be controlled, but I don't know how to bound the cross term $ E\,\zeta_k^{(k)}\eta_k$.
I would appreciate it very much if someone could explain what's going on here.
thx & cheers, math
Since it seems to be difficult, I state the lemmas which the authors need for the proof. I quote:
Lemma 1: Let $ (\eta_n) $ be a sequence of random variables converging weakly in $ L^2 $ to a random variable $ \eta $. Then $$ E|\eta| \le \liminf E|\eta_n| \tag{2}$$ $$ E|\eta|^2 \le \liminf E|\eta_n|^2 \tag{3}$$
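For intuition, here is a quick numerical sketch of Lemma 1 (my own example, not from the book): on $\Omega=[0,1]$ with Lebesgue measure, $\eta_n(\omega)=\sin(2\pi n\omega)$ converges weakly in $L^2$ to $\eta=0$, yet $E|\eta_n|=2/\pi$ for every $n$, so inequality (2) can be strict.

```python
import numpy as np

# Sketch (my own illustration, not the authors' code):
# eta_n(w) = sin(2*pi*n*w) on Omega = [0,1] converges weakly in L^2 to 0,
# but E|eta_n| = 2/pi for all n >= 1, so
#   E|eta| = 0 < 2/pi = liminf E|eta_n|   -- (2) is strict here.

w = np.linspace(0.0, 1.0, 200_000, endpoint=False)  # grid on Omega

def abs_mean(n):
    # numerical approximation of E|eta_n| by a Riemann sum
    return np.abs(np.sin(2 * np.pi * n * w)).mean()

print([round(abs_mean(n), 4) for n in (1, 10, 100)])  # each ~ 2/pi ~ 0.6366
```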
Now a definition:
$$ \xi^{c}:=\xi 1\{|\xi|\le c\} $$ $$ D_m(\xi):=\sum_{i=-\infty}^\infty i2^{-m} 1\{\xi\in (i2^{-m},(i+1)2^{-m}]\} $$
They call them truncation and discretization operators on $ L^0 $.
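For intuition, here is a small numerical sketch of these two operators (my own Python illustration; the function names are mine, not the book's). It checks the pointwise bound $0 < \xi - D_m(\xi) \le 2^{-m}$, which is what makes the discretization error controllable.

```python
import numpy as np

# Sketch of the truncation and discretization operators from the proof
# (my own illustration, not the authors' code).

def truncate(x, c):
    # xi^c := xi * 1{|xi| <= c}
    return np.where(np.abs(x) <= c, x, 0.0)

def D(x, m):
    # D_m(xi) := i * 2^-m  on the event  xi in (i*2^-m, (i+1)*2^-m]
    h = 2.0 ** (-m)
    return np.ceil(x / h - 1.0) * h  # the i with i*h < x <= (i+1)*h

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
for m in (1, 4, 8):
    err = x - D(x, m)                 # discretization error
    assert np.all(err > 0) and np.all(err <= 2.0 ** (-m))
```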
Lemma 2 : Assume $ \sup_nE|\xi_n| < \infty $ and for every $ k \in \mathbb{N} $ the sequence $ (\xi_n^{(k)}) $ converges weakly in $ L^2 $ to a random variable $ \eta_k $. Then there exists $ \eta \in L^1 $ such that $ \eta_k $ tends to $ \eta $ a.s. and in $ L^1 $.
And the last lemma
Lemma 3: Let $ \mathcal{G} $ be a $ \sigma $-algebra generated by a finite partition $ A_1,\dots,A_N $ with $ A_i \in \mathcal{F}$. Assume that a sequence of random variables $ (\xi_n) $ converges weakly in $ L^2 $ to zero. Then for any $ \epsilon >0$ there exists $ n_0 =n_0(\epsilon) $ such that $$ |E(\xi_n|\mathcal{G})|\le \epsilon $$ for all $ n\ge n_0 $.
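To see Lemma 3 in action, here is a small numerical sketch (again my own, not the authors'): on $\Omega=[0,1]$ with Lebesgue measure, $\xi_n(\omega)=\sin(2\pi n\omega)$ converges weakly in $L^2$ to zero, and its conditional expectation on the two-cell partition $\{[0,\tfrac12),[\tfrac12,1]\}$ visibly tends to $0$.

```python
import numpy as np

# Sketch (not from the book): Lemma 3 on Omega = [0,1] with Lebesgue measure.
# xi_n(w) = sin(2*pi*n*w) converges weakly in L^2 to 0, so
# E(xi_n | G) for G generated by {A_1 = [0,1/2), A_2 = [1/2,1]} tends to 0.

w = np.linspace(0.0, 1.0, 200_000, endpoint=False)  # grid on Omega

def cond_exp_on_halves(n):
    xi = np.sin(2 * np.pi * n * w)
    left = w < 0.5
    # E(xi_n | G) is constant on each cell, equal to the average there:
    # E[xi_n 1_{A_i}] / P(A_i)
    return xi[left].mean(), xi[~left].mean()

for n in (1, 11, 101):
    a, b = cond_exp_on_halves(n)
    print(n, round(a, 4), round(b, 4))  # cell averages ~ +-2/(pi*n) -> 0
```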
Solution 1:
There is a lot of notation, as the proof is quite long. However, the answers to your questions are simpler than one might expect.
We have for each integer $m$ and each random variable $X$ that $|D_m(X)-X|\leqslant 2^{-m}$ almost surely. Since $\gamma_k = D_k(\xi^{(k)}_{n_k}-\eta_k)$, this is what is used in order to get $$|\mathbb E[\gamma_k\mid \gamma_1,\dots,\gamma_{k-1}]|\leqslant |\mathbb E[\xi^{(k)}_{n_k}-\eta_k\mid \gamma_1,\dots,\gamma_{k-1}]|+|\mathbb E[\gamma_k-(\xi^{(k)}_{n_k}-\eta_k)\mid \gamma_1,\dots,\gamma_{k-1}]|\leqslant 2^{-k}+2^{-k}=2^{-k+1}.$$
For the second question, notice that $2|ab|\leqslant a^2+b^2$, hence $|\mathbb E[\zeta_k^{(k)}\eta_k]|\leqslant \dfrac{\mathbb E(\zeta_k^{(k)})^2+\mathbb E\eta_k^2}2$, which gives the claimed bound.
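Putting the pieces together (my own write-up of the remaining steps; it uses $E\eta_k^2 \le E(\zeta_k^{(k)})^2 + 2^{-k}$, which is the first inequality from the choice of $n_k$ applied with $n = n_k$):
$$ E(\zeta_k^{(k)}-\eta_k)^2 \le E(\zeta_k^{(k)})^2 + 2\,|E[\zeta_k^{(k)}\eta_k]| + E\eta_k^2 \le 2E(\zeta_k^{(k)})^2 + 2E\eta_k^2 \le 4E(\zeta_k^{(k)})^2 + 2^{-k+1}, $$
and summing against $1/k^2$ yields the displayed bound $4\sum_k \frac{1}{k^2}E(\zeta_k^{(k)})^2 + O(1) < \infty$.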