Convergence of random variables in probability but not almost surely.

Solution 1:

Consider a sequence $\{X_n\}$ of independent random variables such that $P(X_n=1)=\frac 1n$ and $P(X_n=0)=1-\frac 1n$. For $0<\varepsilon<1/2$ we have $P(|X_n|\geq \varepsilon)=\frac 1n\to 0$, hence $X_n\to 0$ in probability. Since $\sum_{n\geq 1}P(X_n=1)=+\infty$ and the events $\{X_n=1\}$ are independent, the second Borel–Cantelli lemma ensures that $P(\limsup_n \{X_n=1\})=1$; that is, almost surely $X_n=1$ for infinitely many $n$, so the sequence $\{X_n\}$ does not converge to $0$ almost surely (in fact the set on which the sequence fails to converge to $0$ has probability $1$).
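
To see this concretely, here is a minimal simulation sketch in Python (not part of the original answer; the sample size `N` and the seed are arbitrary choices). It draws one sample path of the independent Bernoulli$(1/n)$ variables and records the indices where $X_n=1$: the 1s keep appearing arbitrarily far out, even though each individual event becomes rare.

```python
import random

random.seed(0)

N = 100_000
# One sample path of the independent variables X_n with P(X_n = 1) = 1/n:
ones = [n for n in range(1, N + 1) if random.random() < 1 / n]

# P(X_n = 1) = 1/n -> 0 gives convergence to 0 in probability, but sum(1/n)
# diverges, so by the second Borel-Cantelli lemma X_n = 1 infinitely often
# almost surely; along the path the 1s never stop appearing.
print(f"{len(ones)} ones up to n = {N}; largest index seen: {max(ones)}")
```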

Counter-examples for convergence in $L^p$ have already been provided, but I think the example above is also useful.

Solution 2:

The most common example is the "sliding hump." Cut $[0,1]$ into two intervals $I_{1}=[0,1/2]$ and $I_{2}=[1/2, 1]$, and set $f_{1}=\chi_{I_{1}}$ and $f_{2}=\chi_{I_{2}}$. Then cut $[0,1]$ into three equal intervals and let $f_{3}, f_{4},$ and $f_{5}$ be their characteristic functions. Continue in this way: at stage $m$, cut $[0,1]$ into $m$ equal intervals and append their $m$ characteristic functions to the sequence. Then $\{f_{n}\}$ converges to the $0$ function both in probability and in $L^{p}$ for $0<p<\infty$, since $\|f_n\|_p^p$ equals the length of the supporting interval, which tends to $0$.

The sequence of functions does not converge almost everywhere (in fact it converges nowhere): each point of $[0,1]$ lands in infinitely many of the intervals and lies outside infinitely many of them. Hence we can find subsequences of $\{f_{n}\}$ that send our given point to $0$ and subsequences that send it to $1$.
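
A minimal sketch of this construction in Python (the helper names `interval` and `f` and the bookkeeping for stages are my own, not part of the original answer). It enumerates the humps, notes that the $L^p$ norms shrink, and evaluates the sequence at a fixed point to exhibit the oscillation.

```python
from fractions import Fraction

def interval(n):
    """The n-th hump: stage m (m = 2, 3, ...) cuts [0, 1] into the m
    pieces [(j-1)/m, j/m]; stages are traversed in order, so stage 2
    gives f_1, f_2, stage 3 gives f_3, f_4, f_5, and so on."""
    m, start = 2, 1
    while start + m <= n:   # skip past stage m, which contributes m functions
        start += m
        m += 1
    j = n - start + 1       # position of f_n within stage m (1-based)
    return m, Fraction(j - 1, m), Fraction(j, m)

def f(n, x):
    m, a, b = interval(n)
    return 1 if a <= x <= b else 0

# ||f_n||_p^p equals the interval length 1/m -> 0, so f_n -> 0 in every L^p
# (hence in probability), yet at any fixed point the hump keeps returning:
x = Fraction(1, 3)
print([f(n, x) for n in range(1, 45)])  # both 0 and 1 recur forever
```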

Solution 3:

Consider the sequence of functions on $[0,1]$ defined by $$f_{n}(x) = \begin{cases} 1 & x \in \left[ \dfrac{k}{2^m},\dfrac{k+1}{2^m}\right]\\ 0 & \text{otherwise}\end{cases}$$ where $m = \lfloor \log_2(n) \rfloor$ and $k = n - 2^m$. Since $\|f_n\|_2^2 = 2^{-m} \to 0$, this converges to $0$ in $L^2$ (and hence in probability), but not almost surely: for each $x$, every dyadic block $2^m \le n < 2^{m+1}$ contains an index with $f_n(x)=1$.
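
Here is a short Python sketch of this "typewriter" sequence (the function name `f` and the test point `x = 0.7` are illustrative choices, not from the original answer). It computes $m$ and $k$ from $n$ and lists the indices at which $f_n$ equals $1$ at the fixed point, showing one hit in every dyadic block.

```python
def f(n, x):
    """Indicator of [k/2^m, (k+1)/2^m] with m = floor(log2(n)), k = n - 2^m."""
    m = n.bit_length() - 1      # floor(log2(n)) without floating-point error
    k = n - (1 << m)
    return 1 if k / 2**m <= x <= (k + 1) / 2**m else 0

# ||f_n||_2^2 = 2^(-m) -> 0, so f_n -> 0 in L^2 (hence in probability),
# yet at a fixed x the interval sweeps past x once in each dyadic block:
x = 0.7
hits = [n for n in range(1, 128) if f(n, x) == 1]
print(hits)   # one index per block [2^m, 2^(m+1)), so f_n(x) has no limit
```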