Convergence types in probability theory: Counterexamples
Solution 1:
-
Convergence in probability does not imply convergence almost surely: Consider the sequence of random variables $(X_n)_{n \in \mathbb{N}}$ on the probability space $((0,1],\mathcal{B}((0,1]))$ (endowed with Lebesgue measure $\lambda$) defined by $$\begin{align*} X_1(\omega) &:= 1_{\big(\frac{1}{2},1 \big]}(\omega) \\ X_2(\omega) &:= 1_{\big(0, \frac{1}{2}\big]}(\omega) \\ X_3(\omega) &:= 1_{\big(\frac{3}{4},1 \big]}(\omega) \\ X_4(\omega) &:= 1_{\big(\frac{1}{2},\frac{3}{4} \big]}(\omega)\\ &\vdots \end{align*}$$ (the so-called typewriter sequence). Then $X_n$ does not converge almost surely (since for any $\omega \in (0,1]$ and $N \in \mathbb{N}$ there exist $m,n \geq N$ such that $X_n(\omega)=1$ and $X_m(\omega)=0$). On the other hand, since $$\mathbb{P}(|X_n|>0) \to 0 \qquad \text{as} \, \, n \to \infty,$$ it follows easily that $X_n$ converges in probability to $0$.
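The dyadic pattern behind this example can be checked numerically. The sketch below (a hypothetical enumeration of the dyadic intervals, slightly different from the hand-picked one above but equivalent for the argument) confirms both halves of the claim: the interval lengths, i.e. $\mathbb{P}(|X_n|>0)$, shrink to $0$, while for a fixed $\omega$ every dyadic block contains an index $n$ with $X_n(\omega)=1$.

```python
import math

def typewriter(n):
    """Return the interval (a, b] on which X_n equals 1.

    For n in the k-th block (2^k <= n < 2^{k+1}) the interval has
    length 2^{-k}; the blocks sweep across (0, 1] over and over.
    """
    k = int(math.log2(n))   # block index; interval length is 2^{-k}
    j = n - 2**k            # position within the block
    return (j / 2**k, (j + 1) / 2**k)

# P(|X_n| > 0) is just the interval length, which tends to 0:
lengths = [typewriter(n)[1] - typewriter(n)[0] for n in range(1, 65)]
print(lengths[-1])          # 0.015625, i.e. 2^{-6} for n = 64

# Fix omega = 0.3; in every dyadic block there is some n with
# X_n(omega) = 1 (and some m with X_m(omega) = 0), so X_n(omega)
# keeps oscillating between 0 and 1 and does not converge.
omega = 0.3
hits = [n for n in range(1, 65) if typewriter(n)[0] < omega <= typewriter(n)[1]]
print(hits)                 # [1, 2, 5, 10, 20, 41] -- one hit per block
```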
-
Convergence in distribution does not imply convergence in probability: Take any two random variables $X$ and $Y$ such that $X \neq Y$ almost surely but $X=Y$ in distribution (e.g. $X$ a symmetric $\pm 1$ coin flip and $Y := -X$). Then the sequence $$X_n := X, \qquad n \in \mathbb{N}$$ converges in distribution to $Y$. On the other hand, we have $$\mathbb{P}(|X_n-Y|>\epsilon) = \mathbb{P}(|X-Y|>\epsilon) >0$$ for $\epsilon>0$ sufficiently small, i.e. $X_n$ does not converge in probability to $Y$.
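A quick simulation of one such pair (here, assuming the concrete choice of a symmetric $\pm 1$ coin flip $X$ with $Y := -X$) makes the gap visible: the empirical distributions of $X$ and $Y$ agree, yet $|X-Y| = 2$ surely.

```python
import random

random.seed(0)

# X is a symmetric sign (+1 or -1 with probability 1/2) and Y := -X.
# Then X and Y have the same distribution, but X != Y everywhere.
samples_X = [random.choice([-1, 1]) for _ in range(100_000)]
samples_Y = [-x for x in samples_X]

# Equal in distribution: the frequencies of +1 agree up to sampling error.
print(sum(x == 1 for x in samples_X) / len(samples_X))  # about 0.5
print(sum(y == 1 for y in samples_Y) / len(samples_Y))  # about 0.5

# But P(|X - Y| > 1) = 1, so X_n := X cannot converge to Y in probability.
print(sum(abs(x - y) > 1 for x, y in zip(samples_X, samples_Y))
      / len(samples_X))                                 # 1.0
```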
-
Convergence in probability does not imply convergence in $L^p$ I: Consider the probability space $((0,1],\mathcal{B}((0,1]),\lambda|_{(0,1]})$ and define $$X_n(\omega) := \frac{1}{\omega} 1_{\big(0, \frac{1}{n}\big]}(\omega).$$ It is not difficult to see that $X_n \to 0$ almost surely (for fixed $\omega$ we have $X_n(\omega)=0$ for all $n > 1/\omega$); hence in particular $X_n \to 0$ in probability. As $X_n \notin L^1$ (the integral $\int_0^{1/n} \frac{d\omega}{\omega}$ diverges), convergence in $L^1$ does not hold. Note that $L^1$-convergence fails because the random variables are not integrable.
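Both observations can be sketched numerically: pointwise, $X_n(\omega)$ drops to $0$ once $n > 1/\omega$, while the truncated integral $\int_\epsilon^{1/n} \frac{d\omega}{\omega} = \log\frac{1}{n\epsilon}$ grows without bound as $\epsilon \to 0$, witnessing $X_n \notin L^1$.

```python
import math

def X(n, omega):
    """X_n(omega) = (1/omega) * 1_{(0, 1/n]}(omega)."""
    return 1 / omega if 0 < omega <= 1 / n else 0.0

# Pointwise convergence: for fixed omega, X_n(omega) = 0 once n > 1/omega.
omega = 0.01
print([X(n, omega) for n in (50, 99, 100, 101, 200)])
# [100.0, 100.0, 100.0, 0.0, 0.0]

# Non-integrability: the truncated integral of 1/omega over (eps, 1/n]
# equals log(1/(n*eps)) and blows up as eps -> 0, for every fixed n.
n = 1
for eps in (1e-2, 1e-4, 1e-8):
    print(math.log(1 / (n * eps)))   # grows without bound
```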
-
Convergence in probability does not imply convergence in $L^p$ II: Consider the probability space $((0,1],\mathcal{B}((0,1]),\lambda|_{(0,1]})$ and define $$X_n(\omega) := n 1_{\big(0, \frac{1}{n}\big]}(\omega).$$ Then $$\mathbb{P}(|X_n|>\epsilon) = \frac{1}{n} \to 0 \qquad \text{as} \, \, n \to \infty$$ for any $\epsilon \in (0,1)$. This shows that $X_n \to 0$ in probability. Since $$\mathbb{E}X_n = n \cdot \frac{1}{n} = 1,$$ the sequence does not converge to $0$ in $L^1$. Note that $L^1$-convergence fails although the random variables are integrable. (Just as a side remark: This example shows that convergence in probability also does not imply convergence in $L^p_{\text{loc}}$.)
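A small Monte Carlo sketch (with an assumed sample size of $200{,}000$ uniform draws) illustrates the tension: the empirical mean of $X_n$ stays near $1$ for every $n$, while $\mathbb{P}(|X_n| > \tfrac12) \approx \tfrac1n$ shrinks to $0$.

```python
import random

random.seed(1)
N = 200_000
omegas = [random.random() for _ in range(N)]  # uniform samples on [0, 1)

for n in (10, 100, 1000):
    values = [n if 0 < w <= 1 / n else 0 for w in omegas]
    mean = sum(values) / N                    # approximates E[X_n] = 1
    prob = sum(v > 0.5 for v in values) / N   # approximates P(|X_n| > 1/2) = 1/n
    print(n, round(mean, 2), prob)
```

The tall, thin spike carries mass $1$ to an ever smaller event: exactly the failure of uniform integrability that separates convergence in probability from $L^1$-convergence here.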