Let $(W_t)$ be a standard Brownian motion, so that $W_t \sim N(0,t)$. I'm trying to show that the random variable defined by $Z_t = \int_0^t W_s \ ds$ is a Gaussian random variable, but have not gotten very far.

I tried approximating the integral by a Riemann sum: choose $\delta, M$ such that $M\delta = t$, then the integral is approximated by $$ \sum_{k=0}^{M-1} (W_{(k+1)\delta} - W_{k\delta} )\delta = \delta \sum\limits_{k=0}^{M-1} X_k $$ where using standard properties of the Brownian motion, the $X_k$'s are independent identically distributed $N(0, \delta)$ random variables. So I find that $Z_t$ is approximated by a random variable with distribution $ N(0, M\delta^3) = N(0,t\delta^2) $. Now letting $ \delta \to 0$, I find the variance of $Z_t$ is also $0$, which does not make sense to me.

Any help is appreciated!


Solution 1:

First of all, the Riemann sum is given by

$$\sum_{k=0}^{M-1} W_{k \delta} \cdot (\delta (k+1)-\delta k).$$

Note that this expression does not equal

$$\sum_{k=0}^{M-1} (W_{(k+1)\delta}-W_{k \delta}) \cdot \delta.$$


Let $t_k := \delta \cdot k$; then, by summation by parts (Abel summation),

$$\begin{align} G_M &:= \sum_{k=0}^{M-1} W_{k \cdot \delta} \cdot (t_{k+1}-t_k) =\ldots= \sum_{k=0}^{M-1} (W_{t_{k-1}} - W_{t_k}) \cdot t_k + W_{t_{M-1}} \cdot t \\ &= \sum_{k=0}^{M-1} (W_{t_{k-1}}-W_{t_k}) \cdot (t_k-t) \end{align}$$

where $t_{-1}:=0$. As a linear combination of independent Gaussian increments, $G_M$ is Gaussian with $\mathbb{E}G_M=0$, and (using the independence of the increments)

$$\begin{align*} \mathbb{E}(G_M^2)& = \sum_{k=0}^{M-1} (t_k-t)^2 \cdot \underbrace{\mathbb{E}((W_{t_k}-W_{t_{k-1}})^2)}_{t_k-t_{k-1}} \\ &\to \int_0^t (s-t)^2 \, ds \quad \text{as} \, \, M \to \infty. \end{align*}$$

Hence, since $G_M \to Z_t$ almost surely as $M \to \infty$ and almost sure limits of Gaussian random variables are Gaussian, we conclude that $Z_t$ is Gaussian with mean $0$ and variance $\int_0^t (s-t)^2 \, ds = \frac{t^3}{3}$ (see this question for further details).
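As a sanity check (not part of the proof), the limiting law $N(0, t^3/3)$ can be verified by Monte Carlo simulation of the left-endpoint Riemann sum $G_M$; the script below is a minimal sketch using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
t, M, N = 1.0, 256, 200_000      # horizon, grid points, sample paths
delta = t / M

# Brownian increments: each ~ N(0, delta), independent across steps and paths.
dW = rng.normal(0.0, np.sqrt(delta), size=(N, M))
W = np.cumsum(dW, axis=1)        # W_{t_1}, ..., W_{t_M} for each path

# Left-endpoint Riemann sum G_M = sum_{k=0}^{M-1} W_{t_k} * delta
# (the k = 0 term vanishes because W_{t_0} = W_0 = 0).
Z = delta * W[:, :-1].sum(axis=1)

print(Z.mean(), Z.var())         # approximately 0 and t**3/3 = 0.333...
```

The sample variance should land close to $1/3$ for $t=1$; both the $O(\delta)$ discretization bias and the Monte Carlo error are small at this resolution.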

Remark: In fact, the statement holds in a more general setting. The random variable $Y_t := \int_0^t X_s \, ds$ is Gaussian for any (measurable) Gaussian process $(X_t)_{t \geq 0}$, see this question.

Solution 2:

I just found out that we can use the following fact (a special case of the stochastic Fubini theorem):

If $f:[0,T] \times [0,T] \rightarrow \mathbb{R}$ is continuous and deterministic, then \begin{equation} \int_{0}^T \bigg( \int_{0}^T f(s,t) \,dW_s \bigg) \,dt = \int_{0}^T \bigg( \int_{0}^T f(s,t) \, dt \bigg) \,dW_s. \end{equation} Hence (I suppose that it also works for piecewise continuous integrands, such as the indicator below), \begin{eqnarray} \int_{0}^T W_t \,dt & = & \int_0^T \int_0^T \mathbf{1}_{[0,t]} (s) \,dW_s \,dt \\ & = & \int_0^T \int_0^T \mathbf{1}_{[0,t]} (s) \,dt \,dW_s\\ & = & \int_0^T (T-s) \,dW_s\\ & \sim & N \bigg( 0, \int_{0}^T (T-s)^2 \,ds \bigg) = N \bigg( 0, \frac{T^3}{3} \bigg). \end{eqnarray}
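A quick numerical illustration of this identity (a sketch, not part of the argument): discretizing both sides on the same Brownian increments, the Riemann sum for $\int_0^T W_t \, dt$ and the Itô sum for $\int_0^T (T-s)\,dW_s$ agree path by path up to discretization error (the two discrete sums differ by exactly $\delta \, W_T$):

```python
import numpy as np

rng = np.random.default_rng(1)
T, M, N = 1.0, 256, 10_000
delta = T / M
grid = np.arange(M) * delta              # left endpoints t_0, ..., t_{M-1}

dW = rng.normal(0.0, np.sqrt(delta), size=(N, M))
W = np.cumsum(dW, axis=1)                # W_{t_1}, ..., W_{t_M}

# Left-hand side: Riemann sum for int_0^T W_t dt (W_{t_0} = 0 term omitted).
lhs = delta * W[:, :-1].sum(axis=1)

# Right-hand side: Ito sum for int_0^T (T - s) dW_s with left endpoints.
rhs = ((T - grid) * dW).sum(axis=1)

print(np.abs(lhs - rhs).max())           # small: the sums differ by delta * W_T
```

Both columns of numbers have sample variance near $T^3/3 = 1/3$, matching the claimed law.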

Solution 3:

This is an old question, but it may be worth providing a better answer:

Let $Y_t := \int_0^t W_s \, ds$ and let $\phi(y,w,t,\omega)$ be the conditional characteristic function $\mathbb{E}[\exp(i\omega Y_T) \mid Y_t = y, W_t = w]$ (note that $Y_t$ alone is not Markov, so we condition on the Markov pair $(Y_t, W_t)$). By the law of iterated expectations, the process $\phi(Y_t, W_t, t, \omega)$ is a martingale. It is then straightforward to derive a partial differential equation for $\phi$ using Itô's lemma and setting the drift to zero. It will become apparent that the solution takes a Gaussian form.
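A sketch of this computation, under the assumption that we track the Markov state $(Y_t, W_t)$: since $dY_t = W_t \, dt$, Itô's lemma gives

$$ d\phi = \Big( \partial_t \phi + w \, \partial_y \phi + \tfrac12 \partial_{ww} \phi \Big) \, dt + \partial_w \phi \, dW_t, $$

so the martingale property forces the drift to vanish, i.e. $\partial_t \phi + w \, \partial_y \phi + \tfrac12 \partial_{ww} \phi = 0$ with terminal condition $\phi(y,w,T,\omega) = e^{i\omega y}$. The Gaussian ansatz

$$ \phi(y,w,t,\omega) = \exp\Big( i\omega \big( y + w(T-t) \big) - \tfrac{\omega^2}{2} \cdot \tfrac{(T-t)^3}{3} \Big) $$

solves this PDE (direct substitution: the terms $\pm i\omega w$ and $\pm \tfrac{\omega^2}{2}(T-t)^2$ cancel pairwise), and evaluating at $y = w = 0$, $t = 0$ recovers the characteristic function of $N(0, T^3/3)$, in agreement with the other solutions.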