Showing that Brownian motion is bounded with non-zero probability

How do you show that, for every bound $\epsilon > 0$, there is a non-zero probability that the motion stays within the bound on a finite interval? That is, $$\mathbb{P} \left(\sup_{t\in[0,1]} |B(t)| < \epsilon\right) > 0.$$

I tried using the reflection principle: if $B^*$ is the motion reflected at the hitting time $\tau$ of the bound, then I believe that $$P(\sup_{t\in[0,1]} |B(t)| > \epsilon) = P(|B(1)| > \epsilon) + P(|B^*(1)| > \epsilon) - P(\tau < 1, |B(1)-B(\tau)| > 2\epsilon).$$ However, I have no idea how to bound the third term in a useful way, i.e. one which would show that this probability is less than one.


Solution 1:

Here are three different methods of showing that $\mathbb{P}(\sup_{t\in[0,1]}\vert B_t\vert < \epsilon)$ is nonzero.

A simple argument based on intuition. You can break the unit interval into a lot of small time steps and, by continuity, the Brownian motion will not move much across each of these steps. By independence of the increments, there is a positive (but small) probability that they largely cancel out, so $B$ stays within $\epsilon$ of the origin. To make this precise, choose a positive integer $n$ such that $q\equiv\mathbb{P}(\sup_{t\le1/n}\vert B_t\vert < \epsilon/2)$ is nonzero ($q$ can be made as close to 1 as you like, by taking $n$ large). By symmetry, the event $\{\sup_{t\le1/n}\vert B_t\vert < \epsilon/2,\ B_{1/n}>0\}$ has probability $q/2$. Note that, if $\sup_{t\in[k/n,(k+1)/n]}\vert B_t-B_{k/n}\vert < \epsilon/2$ and $B_{(k+1)/n}-B_{k/n}$ has the opposite sign to $B_{k/n}$ for each $k=0,1,\ldots,n-1$, then $\vert B_t\vert$ will be bounded by $\epsilon/2$ at the times $k/n$ and, therefore, $\sup_{t\le1}\vert B_t\vert$ will be less than $\epsilon$. So, $\mathbb{P}(\sup_{t\le1}\vert B_t\vert < \epsilon)\ge(q/2)^n$.
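As a quick numerical sanity check (not a proof), the probability is easy to estimate by simulating discretized paths. Here's a short Python sketch; the step count, path count and values of $\epsilon$ are arbitrary choices, and checking $\vert B\vert$ only at grid points slightly overestimates the probability:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_sup_prob(eps, n_steps=500, n_paths=20_000):
    """Monte Carlo estimate of P(sup_{t<=1} |B_t| < eps)."""
    dt = 1.0 / n_steps
    # Brownian increments over each step are independent N(0, dt).
    inc = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(inc, axis=1)
    # Fraction of paths whose discretized sup stays below eps.
    return np.mean(np.max(np.abs(paths), axis=1) < eps)

for eps in [1.0, 0.75, 0.5]:
    print(eps, estimate_sup_prob(eps))
# Small but clearly positive, and shrinking fast as eps decreases.
```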

Use a cunning trick. If $X,Y$ are independent Brownian motions over the interval $[0,1]$, then $B=(X-Y)/\sqrt{2}$ is also a Brownian motion. The sample paths of $X,Y,B$ can be considered as lying in the (complete, separable) metric space $C([0,1])$ of continuous functions $[0,1]\to\mathbb{R}$ under the supremum norm. By separability, $C([0,1])$ can be covered by countably many open balls of radius $\epsilon/\sqrt{2}$. So, by countable additivity of the probability measure, there exists at least one such ball containing $X$ with probability $q > 0$. By independence, $X,Y$ are both contained in this ball with probability $q^2 > 0$, in which case $\Vert B\Vert_\infty=\Vert X- Y\Vert_\infty/\sqrt{2}<\epsilon$.
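Here's a small numerical illustration of the trick (just a sketch: in the proof the ball centre comes from a countable cover, but the inequality $q^2\le\mathbb{P}(\Vert B\Vert_\infty<\epsilon)$ holds for a ball around any fixed path, so I simply pick the centre $\omega_0(t)=t$):

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths, eps = 500, 20_000, 1.0
t = np.arange(1, n_steps + 1) / n_steps
w0 = t  # an arbitrary fixed centre path

def sample_paths(n):
    inc = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n, n_steps))
    return np.cumsum(inc, axis=1)

X, Y = sample_paths(n_paths), sample_paths(n_paths)
# q: probability that X lies in the sup-norm ball of radius eps/sqrt(2).
q = np.mean(np.max(np.abs(X - w0), axis=1) < eps / np.sqrt(2))
# If X and Y both lie in that ball, then ||X - Y||/sqrt(2) < eps.
p = np.mean(np.max(np.abs(X - Y), axis=1) / np.sqrt(2) < eps)
print(f"q^2 = {q**2:.4f} <= P(sup|B| < eps) ~= {p:.4f}")
```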

Exact calculation. You can calculate an exact expression for the probability, as an infinite sum, and verify that it is dominated by a single positive term as $\epsilon$ goes to zero. This is not as simple as the intuitive argument I gave above, but has the advantage that it also gives an accurate asymptotic expression for the probability, which goes to zero like $e^{-\pi^2/(8\epsilon^2)}$ as $\epsilon\to0$ (this is positive, but tends to zero very quickly).

The probability can be calculated using the reflection principle (also see my comments and Douglas Zare's answer to this question). Writing $p(x)=(2\pi)^{-1/2}e^{-x^2/2}$ for the probability density function of $B_1$ and $f(x)=\sum_{n=-\infty}^\infty(-1)^n1_{\{(2n-1)\epsilon < x < (2n+1)\epsilon\}}$ (which is a kind of square wave function), $$ \mathbb{P}\left(\sup_{t\le1}\vert B_t\vert < \epsilon\right)=\mathbb{E}[f(B_1)]=\int_{-\infty}^\infty f(x)p(x)\,dx.\qquad{\rm(1)} $$ This expression comes from the reflection principle, which says that reflecting $B$ after it first hits $\pm\epsilon$ gives another Brownian motion. That is, $\hat B_t\equiv B_t+1_{\{t\ge T\}}2(B_T-B_t)$ is a Brownian motion, where $T$ is the first time at which $\vert B_T\vert=\epsilon$. As $f$ is antisymmetric about both $\epsilon$ and $-\epsilon$, the sum $f(B_1)+f(\hat B_1)$ vanishes whenever $T\le1$. So, $1_{\{T > 1\}}=(f(B_1)+f(\hat B_1))/2$, and taking the expectation gives (1).
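For a concrete check, the integral in (1) is an alternating sum of normal-CDF increments and can be evaluated directly. A short sketch (the truncation point is arbitrary; the terms decay extremely fast):

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_via_reflection(eps, n_terms=50):
    # E[f(B_1)] = sum_n (-1)^n P((2n-1) eps < B_1 < (2n+1) eps)
    return sum((-1) ** n * (Phi((2 * n + 1) * eps) - Phi((2 * n - 1) * eps))
               for n in range(-n_terms, n_terms + 1))

for eps in [1.0, 0.5, 0.3]:
    print(eps, prob_via_reflection(eps))
# eps = 1.0 gives ~0.3708, consistent with Monte Carlo estimates above.
```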

You can perform the integral in (1) directly to express the probability as an infinite sum over the cumulative normal distribution function, but this is not so good in the limit where $\epsilon$ is small, as you don't have a single dominant term. Alternatively, the integral in (1) can be written as $\int_{-\epsilon}^\epsilon\theta(x)\,dx$ where $\theta(x)=\sum_{n=-\infty}^\infty(-1)^np(x+2n\epsilon)$. As $\theta$ has period $4\epsilon$ you can write it as a Fourier series, and working out the coefficients gives $$ \theta(x)=\epsilon^{-1}\sum_{\substack{n > 0,\\n{\rm\ odd}}}\cos\left(\frac{n\pi x}{2\epsilon}\right)\exp\left(-\frac{n^2\pi^2}{8\epsilon^2}\right). $$ This is a very fast converging sum, especially for small $\epsilon$ (the terms vanish much faster than exponentially in $n$). In fact, $\theta$ is a theta function, and the Fourier expansion is the same thing as the Jacobi identity. Integrating term by term gives $$ \mathbb{P}\left(\sup_{t\le1}\vert B_t\vert < \epsilon\right)=\sum_{\substack{n > 0,\\ n{\rm\ odd}}}\frac{4}{n\pi}(-1)^{(n-1)/2}\exp\left(-\frac{n^2\pi^2}{8\epsilon^2}\right). $$ As the first term goes to zero much more slowly than the sum of the remaining terms (as $\epsilon\to0$), this gives the asymptotic expression $$ \mathbb{P}\left(\sup_{t\le1}\vert B_t\vert < \epsilon\right)\sim\frac{4}{\pi}\exp\left(-\frac{\pi^2}{8\epsilon^2}\right). $$
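To see how dominant the first term is, here is a quick sketch comparing the full series with the one-term asymptotic:

```python
from math import exp, pi

def prob_series(eps, n_max=99):
    """Sum the series over odd n (terms decay super-exponentially)."""
    return sum((-1) ** ((n - 1) // 2) * (4 / (n * pi))
               * exp(-n**2 * pi**2 / (8 * eps**2))
               for n in range(1, n_max + 1, 2))

for eps in [1.0, 0.5, 0.25]:
    full = prob_series(eps)
    first = (4 / pi) * exp(-pi**2 / (8 * eps**2))
    print(f"eps={eps}: series={full:.6e}, first term={first:.6e}")
# Already for eps = 0.5 the first term matches the series to many digits.
```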

Solution 2:

There must be a simple argument, but here's a fancy one, roughly following El Moro's idea.

$Y_t = \cos(\theta B_t) e^{t \theta^2/2}$ is a continuous martingale for any $\theta \in \mathbb{R}$. Also, $\tau = \inf \{ t : |B_t| = \epsilon\}$ is a stopping time, so $Y_{t \wedge \tau}$ is also a martingale, and in particular $$E[Y_{t \wedge \tau}] = E[Y_0] = 1 \quad (*)$$ for all $t$. If $P(\sup_{t \in [0,1]} |B_t| < \epsilon) = 0$, then $\tau \le 1$ almost surely, and taking $t=1$ in (*) gives $1 = E[Y_\tau] = \cos(\theta \epsilon) E[e^{\tau \theta^2/2}]$, which is absurd: taking $\theta = \frac{\pi}{2\epsilon}$ makes $\cos(\theta \epsilon) = 0$, while $E[e^{\tau \theta^2/2}] \le e^{\theta^2/2}$ is finite (since $\tau \le 1$), so the right-hand side would be $0$, not $1$.

This is sort of a baby version of the optional stopping theorem.
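If you want to see the identity (*) numerically, here is a simulation sketch. The parameters are arbitrary, and I use a subcritical $\theta = \pi/(4\epsilon)$ because the discretized path slightly overshoots the barrier, which biases the stopped value downwards a little:

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n_steps, n_paths = 0.5, 2_000, 10_000
theta = np.pi / (4 * eps)  # subcritical, so cos(theta * eps) > 0
dt = 1.0 / n_steps

inc = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(inc, axis=1)          # paths[:, k] is B at time (k+1) dt
hit = np.abs(paths) >= eps
ever_hit = hit.any(axis=1)
first = np.where(ever_hit, hit.argmax(axis=1), n_steps - 1)
t_stop = (first + 1) * dt               # approximately min(tau, 1)
b_stop = paths[np.arange(n_paths), first]
Y = np.cos(theta * b_stop) * np.exp(t_stop * theta**2 / 2)
print("E[Y_{1 ^ tau}] ~=", Y.mean())    # close to 1 (small overshoot bias)
print("P(tau > 1)     ~=", np.mean(~ever_hit))
# If P(tau > 1) were 0, then theta = pi/(2 eps) would give the
# contradiction 1 = cos(pi/2) * E[exp(tau theta^2 / 2)] = 0.
```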

Edit: Here is an even fancier argument, which I found in H. H. Kuo's *Gaussian Measures on Banach Spaces*.

Let $W = \{ \omega \in C([0,1]) : \omega(0) = 0 \}$ be the classical Wiener space, equipped with the sup norm, and let $\mu$ be Wiener measure. We are trying to show that for any $\epsilon > 0$ we have $\mu(B(0,\epsilon))>0$. As in George's "cunning trick", $W$ is separable, so it can be covered by a countable number of balls of radius $\epsilon/2$. By countable additivity, one of these balls, call it $B(\omega_0, \epsilon/2)$, has positive $\mu$ measure.

Let $H \subset W$ be the Cameron-Martin space of paths with one square-integrable weak derivative. $H$ is dense in $W$, so there exists $h \in H \cap B(\omega_0, \epsilon/2)$. By the Cameron-Martin theorem $\mu$ is quasi-invariant under translation by $H$, i.e. $\mu \ll \mu(\cdot - h)$. Thus $\mu(B(\omega_0 - h, \epsilon/2)) > 0$. By the triangle inequality $B(\omega_0 - h, \epsilon/2) \subset B(0, \epsilon)$ so this completes the proof.

Note this shows that in any abstract Wiener space $(W, H, \mu)$, the measure $\mu$ charges all open sets of $W$, or in other words $\mu$ has full support.
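As a footnote, the quasi-invariance step can be checked numerically for a concrete $h$. With the (arbitrary) choice $h(t) = ct$, the Cameron-Martin density of the law of $B + h$ with respect to Wiener measure is the explicit factor $e^{cB_1 - c^2/2}$, so estimating the translated ball's measure directly and by reweighting should agree:

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_paths, r, c = 500, 20_000, 1.0, 1.0
dt = 1.0 / n_steps

inc = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(inc, axis=1)
h = c * np.arange(1, n_steps + 1) * dt   # h(t) = c t, so h' = c

# Direct estimate of P(B + h in ball(0, r)).
direct = np.mean(np.max(np.abs(B + h), axis=1) < r)

# Reweighted estimate: E[1{B in ball} * Z], Z = exp(c B_1 - c^2/2).
Z = np.exp(c * B[:, -1] - c**2 / 2)
reweighted = np.mean((np.max(np.abs(B), axis=1) < r) * Z)

print("direct     ~=", direct)
print("reweighted ~=", reweighted)
# Since Z > 0 everywhere, mu(A) > 0 forces mu(A - h) > 0.
```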