An inequality in the proof of convergence of martingale transforms
I'm studying the proof of Theorem 1 in the article "Martingale transforms" by D. L. Burkholder, and I can't understand one step in the second part of the proof.
We have a uniformly bounded submartingale $ f = (f_1,f_2,\ldots) $ on a probability space $ (\Omega, G, \textbf{P}) $ and a predictable sequence $ v = (v_1,v_2,\ldots) $ (each $ v_n $ is a real $ G_{n-1} $-measurable function). The transform $ g = (g_1, g_2, \ldots) $ of $ f $ by $ v $ is defined by \begin{equation} g_n = \sum\limits_{k=1}^{n} v_k d_k, \qquad n \ge 1, \end{equation} where $ d_1 = f_1 $ and $ d_n = f_n - f_{n-1} $ for $ n\ge2 $.
The statement to be proved is: "If $ g $ is the transform of a uniformly bounded submartingale $ f $ and $ v^* \le 1 $, where $ v^*(\omega) = \sup\limits_{n\ge1}|v_n(\omega)| $, then $ g $ converges almost everywhere."
We may assume that $ f \ge 0 $. Therefore, $ \sum_{k=1}^n \mathbf{E}d_k^2 \le \mathbf{E}f_n^2 $, $ \; n \ge 1 $, since, for $ n \ge 2 $, $ \mathbf{E}f_{n-1}d_n = \mathbf{E}[f_{n-1}\mathbf{E}(d_n|G_{n-1})] \ge 0 $ implying that $ \mathbf{E}f_n^2 = \mathbf{E}(f_{n-1} + d_n)^2 \ge \mathbf{E}f_{n-1}^2 + \mathbf{E}d_n^2 $.
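To spell out how the displayed bound follows (just iterating the last inequality, which the proof leaves implicit): since $ d_1 = f_1 $, $$ \mathbf{E}f_n^2 \ge \mathbf{E}f_{n-1}^2 + \mathbf{E}d_n^2 \ge \cdots \ge \mathbf{E}f_1^2 + \sum_{k=2}^{n}\mathbf{E}d_k^2 = \sum_{k=1}^{n}\mathbf{E}d_k^2. $$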
Let $ \hat{f} = ( \hat{f}_1, \hat{f}_2, \ldots) $ and $ \hat{g} = (\hat{g}_1, \hat{g}_2, \ldots) $ be defined by $ \hat{f}_n = \sum_{k=1}^n \hat{d}_k, \; \hat{g}_n = \sum_{k=1}^n v_k \hat{d}_k, \; n\ge1 $, where $ \hat{d}_1 = d_1 \; \text{and} \; \hat{d}_n = d_n - \mathbf{E}(d_n|G_{n-1}), \; n\ge 2$. Then $ \hat{f} $ is a martingale and $ \hat{g} $ is a transform of $ \hat{f} $.
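As far as I can tell, the martingale property here is the one-line check that $ \mathbf{E}(d_n|G_{n-1}) $ is $ G_{n-1} $-measurable, so for $ n \ge 2 $ $$ \mathbf{E}(\hat{d}_n|G_{n-1}) = \mathbf{E}(d_n|G_{n-1}) - \mathbf{E}\big[\mathbf{E}(d_n|G_{n-1})\,\big|\,G_{n-1}\big] = 0, $$ i.e. the $ \hat{d}_n $ are martingale differences, and $ \hat{g} $ is the transform of $ \hat{f} $ by the same predictable sequence $ v $.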
Here is the step I do not understand. How can this inequality be justified, and why is it true? \begin{equation} \mathbf{E}\hat{d}_n^2 = \mathbf{E}[d_n - \mathbf{E}(d_n|G_{n-1})]^2 \le \mathbf{E}d_n^2, \; n\ge 2 \end{equation}
I would be very grateful if someone could help me figure it out.
Since everything is in $L^2$, the conditional expectation is simply the orthogonal projection onto the appropriate subspace.
The inequality is thus a consequence of the Pythagorean theorem: $$\mathbf{E}\left[\left(d_n - \mathbf{E}(d_n|G_{n-1})\right)^2\right] = \left\lVert d_n - p(d_n)\right\rVert^2 = \left \lVert d_n\right\rVert^2 - \left \lVert p(d_n)\right\rVert^2\leq \left \lVert d_n\right\rVert^2=\mathbf{E}[d_n^2],$$ where $p$ denotes the orthogonal projection onto the subspace $L^2(\Omega, G_{n-1},\mathbf{P})$ and $\lVert\cdot \rVert$ is the $L^2$ norm.
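If you prefer not to invoke the projection language, the same bound follows from a direct expansion using only the tower property. Introducing the shorthand $e_n := \mathbf{E}(d_n|G_{n-1})$ purely for brevity, $$\mathbf{E}\big[(d_n - e_n)^2\big] = \mathbf{E}d_n^2 - 2\,\mathbf{E}[d_n e_n] + \mathbf{E}e_n^2 = \mathbf{E}d_n^2 - \mathbf{E}e_n^2 \le \mathbf{E}d_n^2,$$ since $\mathbf{E}[d_n e_n] = \mathbf{E}\big[\mathbf{E}(d_n e_n \mid G_{n-1})\big] = \mathbf{E}\big[e_n\,\mathbf{E}(d_n \mid G_{n-1})\big] = \mathbf{E}e_n^2$.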