The proof of: "Suppose that $X_n$ converges in $L^2$ to $X$. Then the convergence also holds in all $L^p$ spaces for $1\le p<\infty$."

I am trying to follow the proof of this statement in the textbook Brownian Motion, Martingales, and Stochastic Calculus (http://eprints.stta.ac.id/198/1/2016_Book_BrownianMotionMartingalesAndSt.pdf):

[Page 3, Prop. 1.1] Let $(X_n)_{n\ge 1}$ be a sequence of real random variables such that $X_n$ has the $N(m_n, \sigma_n^2)$ distribution. Suppose that $X_n$ converges in $L^2$ to $X$. Then the convergence also holds in all $L^p$ spaces for $1\le p<\infty$.

In the proof, one first establishes that $$ \sup_n E(|X_n-X|^q)<\infty \quad \text{for all } q\ge 1. $$

Let $p\ge 1$. Why does the sequence $Y_n=|X_n-X|^p$ converge in probability to $0$? And how does one deduce that $Y_n$ converges to $0$ in $L^1$?


Solution 1:

Part (i) of the proof:

Note that $L^2$ convergence $\Rightarrow$ convergence in probability $\Rightarrow$ weak convergence. This ensures convergence of the characteristic functions, i.e. $Ee^{i\xi X}=\lim_{n\to \infty}Ee^{i\xi X_n}$ for every $\xi\in\mathbb{R}$. Moreover, $L^2$ convergence gives $m_n = E(X_n) \to E(X) =: m$ and $\sigma_n^2 = \operatorname{Var}(X_n) \to \operatorname{Var}(X) =: \sigma^2$, and together these imply $X \sim N(m, \sigma^2)$.
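To spell out the last step, the characteristic function of a $N(m_n,\sigma_n^2)$ random variable is explicit, so $$ Ee^{i\xi X} = \lim_{n\to\infty} Ee^{i\xi X_n} = \lim_{n\to\infty} \exp\left(i\xi m_n - \frac{\sigma_n^2 \xi^2}{2}\right) = \exp\left(i\xi m - \frac{\sigma^2 \xi^2}{2}\right), $$ which is the characteristic function of the $N(m,\sigma^2)$ distribution (a point mass at $m$ if $\sigma=0$). Since the characteristic function determines the law, $X \sim N(m,\sigma^2)$.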

Part (ii) of the proof:

Since $L^2$ convergence implies convergence in probability, we find for any $\varepsilon>0$, $$ \lim_{n\to \infty}P(|X_n-X|>\varepsilon) = 0 $$
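For completeness, this implication is just Chebyshev's (Markov's) inequality: $$ P(|X_n-X|>\varepsilon) \le \frac{E(|X_n-X|^2)}{\varepsilon^2} \xrightarrow[n\to\infty]{} 0, $$ because $E(|X_n-X|^2)\to 0$ is exactly the assumed $L^2$ convergence.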

For any $p>0$, since $t\mapsto t^p$ is increasing on $[0,\infty)$, the event $\{|X_n-X|>\varepsilon\}$ coincides with $\{|X_n-X|^p>\varepsilon^p\}$, so this can be rewritten as $$ \lim_{n\to \infty}P(|X_n-X|^p>\varepsilon^p) = 0. $$ As $\varepsilon$ ranges over $(0,\infty)$, so does $\varepsilon^p$; hence $Y_n = |X_n-X|^p$ converges to $0$ in probability.

By assumption, $X_n =_d m_n+\sigma_nN$ where $N\sim N(0,1)$, and by (i), $X =_d m+\sigma N$. By the triangle (Minkowski) inequality in $L^q$, for any $q\geq 1$, $$ \sup_n (E|X_n|^q)^{1/q} \leq \sup_n\left\{(E|m_n|^q)^{1/q} + (E|\sigma_nN|^q)^{1/q}\right\} =\sup_n\big(|m_n|+\sigma_n\|N\|_q\big)<\infty, $$ since the convergent sequences $m_n \to m$ and $\sigma_n \to \sigma$ are bounded and $\|N\|_q<\infty$. Using the triangle inequality again, together with $E|X|^q<\infty$ (as $X\sim N(m,\sigma^2)$), gives $$ \sup_nE|X_n-X|^q < \infty. $$
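In detail, the second application of Minkowski's inequality reads $$ \sup_n \big(E|X_n-X|^q\big)^{1/q} \leq \sup_n \big(E|X_n|^q\big)^{1/q} + \big(E|X|^q\big)^{1/q} \leq \sup_n\big(|m_n|+\sigma_n\|N\|_q\big) + |m| + \sigma\|N\|_q < \infty, $$ where $\|N\|_q=(E|N|^q)^{1/q}$ is finite for every $q\ge 1$ because a standard Gaussian has finite absolute moments of all orders.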

This combined with Hölder's (Cauchy–Schwarz) inequality implies, for any $p\geq 1$, $$ E\big(|X_n-X|^p;\, |X_n-X|^p>M\big) \leq \big(E|X_n-X|^{2p}\big)^{1/2}\cdot P\big(|X_n-X|^p>M\big)^{1/2} \leq C\, P\big(|X_n-X|^p>M\big)^{1/2}, $$ where $E(Y;A)$ denotes $E(Y\mathbf{1}_A)$ and $C := \sup_n \big(E|X_n-X|^{2p}\big)^{1/2}$ is finite by the moment bound above with $q=2p$ and does not depend on $n$.

The last factor converges to $0$ as $M \to \infty$ uniformly in $n$, thanks to the uniform moment bound (see the display below). Therefore the family $Y_n = |X_n-X|^p$, $n\ge 1$, is uniformly integrable.
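To make the uniformity explicit, Markov's inequality together with the moment bound gives, for every $n$ and every $M>0$, $$ P\big(|X_n-X|^p>M\big) \le \frac{E|X_n-X|^{p}}{M} \le \frac{\sup_k E|X_k-X|^{p}}{M}, $$ and the right-hand side is independent of $n$ and tends to $0$ as $M\to\infty$. Plugging this into the previous display yields $\sup_n E(|X_n-X|^p;\,|X_n-X|^p>M)\to 0$ as $M\to\infty$, which is uniform integrability of $(Y_n)_{n\ge 1}$.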

Since $Y_n$ converges to $0$ in probability and $(Y_n)$ is uniformly integrable, the Vitali convergence theorem gives $Y_n \to 0$ in $L^1$.
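Unwinding the definition of $Y_n$, this is exactly the conclusion of the proposition: $$ E|X_n-X|^p = E(Y_n) \xrightarrow[n\to\infty]{} 0 \quad \text{for every } 1\le p<\infty, $$ that is, $X_n \to X$ in $L^p$ for every $1\le p<\infty$.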