Probability density function of the integral of a continuous stochastic process

Solution 1:

You may use a computational method to approximate, as accurately as desired, the probability density function of $I(t)=\int_0^t \cos(B(s))\,\mathrm{d}s$. I will do so for $0\leq t\leq 1$.

Consider the Karhunen-Loève expansion of $B(t)$ on $[0,1]$: $$ B(t)=\sum_{j=1}^\infty \frac{\sqrt{2}}{\left(j-\frac12\right)\pi}\sin\left(\left(j-\frac12\right)\pi t\right)\xi_j, $$ where $\xi_1,\xi_2,\ldots$ are independent and $\text{Normal}(0,1)$ distributed random variables. The convergence of the series holds in $L^2([0,1]\times\Omega)$. Truncate $$ B_J(t)=\sum_{j=1}^J \frac{\sqrt{2}}{\left(j-\frac12\right)\pi}\sin\left(\left(j-\frac12\right)\pi t\right)\xi_j, $$ and define $I_J(t)=\int_0^t \cos(B_J(s))\,\mathrm{d}s$.
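As a quick sanity check of this expansion (an illustrative sketch that is not part of the original answer; covJ is just a helper name), the truncated covariance $\sum_{j=1}^J \frac{2}{\left(\left(j-\frac12\right)\pi\right)^2}\sin\left(\left(j-\frac12\right)\pi s\right)\sin\left(\left(j-\frac12\right)\pi t\right)$ should approximate $\operatorname{Cov}(B(s),B(t))=\min(s,t)$:

(* truncated Karhunen-Loeve covariance; it approaches Cov[B(s), B(t)] = Min[s, t] as J grows *)
covJ[s_, t_, J_] := 
  Sum[2*Sin[(j - 1/2)*Pi*s]*Sin[(j - 1/2)*Pi*t]/((j - 1/2)*Pi)^2, {j, 1, J}];
{covJ[0.3, 0.7, 50], Min[0.3, 0.7]}  (* the two numbers should nearly agree *)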

It is easy to see that:

  1. $I_J(t)\rightarrow I(t)$ in $L^2(\Omega)$, for each $0\leq t\leq 1$. Indeed, by the Cauchy-Schwarz inequality and the fact that $\cos$ is $1$-Lipschitz, $\mathbb{E}[(I(t)-I_J(t))^2]\leq t\,\|B-B_J\|_{L^2([0,t]\times\Omega)}^2\rightarrow 0$ as $J\rightarrow\infty$. (A numerical estimate of this truncation error is sketched right after this list.)

  2. $I_J(t)\rightarrow I(t)$ almost surely, for each $0\leq t\leq 1$. Indeed, for almost every fixed $\omega\in\Omega$, we have $B_J(\cdot)(\omega)\rightarrow B(\cdot)(\omega)$ in $L^2([0,1])$, because deterministic Fourier series converge in $L^2$. Since $\cos$ is Lipschitz, $\cos(B_J(\cdot)(\omega))\rightarrow \cos(B(\cdot)(\omega))$ in $L^2([0,1])$, and therefore $I_J(t)(\omega)\rightarrow I(t)(\omega)$ for each $t\in[0,1]$.
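To get a feeling for how large $J$ must be, note that the bound in fact 1, combined with $[0,t]\subseteq[0,1]$, gives $\mathbb{E}[(I(t)-I_J(t))^2]\leq t\sum_{j>J}\frac{1}{\left(\left(j-\frac12\right)\pi\right)^2}$, and the full sum over $j\geq1$ equals $\frac12$. A small Mathematica check of this tail (an illustrative sketch, not part of the original code):

(* mean-square truncation error bound: tail of the Karhunen-Loeve eigenvalue sum *)
tail[J_] := 1/2 - Sum[1/((j - 1/2)*Pi)^2, {j, 1, J}];
N[tail[40]]  (* about 2.5*10^-3, so J = 40 already gives a small mean-square error *)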

Although these two facts are not enough to guarantee that the density functions of $\{I_J(t):J\geq1\}$ tend to the density function of $I(t)$, the density function of $I_J(t)$ is a very good candidate (recall that this is a computational method, not a proof). The good thing about $I_J(t)$ is that it depends on only a finite number of the $\xi_j$, so it is possible to obtain exact realizations of $I_J(t)$. If we generate a sufficiently large number $M$ of realizations of $I_J(t)$, then a kernel density estimation yields an approximate density function for $I_J(t)$.

I have written a function in Mathematica to approximate the distribution of $I(T)$, for $0\leq T\leq 1$, using a truncation order $J$ and a number of simulations $M$:

distributionIT[T_, J_, simulations_] := 
  Module[{realizationsIT, simulation, xi, BJ},
   (* Monte Carlo realizations of I_J(T), with B_J the truncated Karhunen-Loeve expansion *)
   realizationsIT = ConstantArray[0, simulations];
   For[simulation = 1, simulation <= simulations, simulation++,
    (* sample the first J coefficients xi_j and build one path of B_J *)
    xi = RandomVariate[NormalDistribution[0, 1], J];
    BJ[t_] := 
     Sqrt[2]*Sum[
       Sin[(j - 0.5)*Pi*t]/((j - 0.5)*Pi)*xi[[j]], {j, 1, J}];
    (* one realization of I_J(T), computed by numerical quadrature *)
    realizationsIT[[simulation]] = NIntegrate[Cos[BJ[t]], {t, 0, T}];
    ];
   (* kernel density estimate built from the simulated realizations *)
   SmoothKernelDistribution[realizationsIT]
   ];

Let us do a numerical example. Choose $T=1$. Write

distribution1 = distributionIT[1, 40, 50000];
plot1 = Plot[PDF[distribution1, x], {x, -2, 2}, Frame -> True, PlotRange -> All, PlotStyle -> Green];
distribution2 = distributionIT[1, 50, 100000];
plot2 = Plot[PDF[distribution2, x], {x, -2, 2}, Frame -> True, PlotRange -> All, PlotStyle -> Blue];
Legended[Show[plot1, plot2], LineLegend[{Green, Blue}, {"J=40, M=50000", "J=50, M=100000"}]]

[Plot: kernel density estimates of the PDF of $I(1)$ for $J=40$, $M=50000$ (green) and $J=50$, $M=100000$ (blue)]

We plot the estimated density function for $J=40$, $M=50000$ and for $J=50$, $M=100000$. The two curves are visually indistinguishable, so the method provides a good approximation of the probability density function of $I(1)$.
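If one wants a quantitative version of this visual comparison (a hypothetical check, not in the original answer), one can look at the largest pointwise discrepancy between the two kernel density estimates on the plotted range; a value close to zero indicates that increasing $J$ and $M$ further does not change the estimate appreciably:

(* hypothetical check: largest pointwise discrepancy between the two estimated densities on [-2, 2] *)
Max[Table[Abs[PDF[distribution1, x] - PDF[distribution2, x]], {x, -2, 2, 0.01}]]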

Similar computations for $T=0.34$ give the following plot:

[Plot: kernel density estimate of the PDF of $I(0.34)$]

If you plot the approximate density function for smaller and smaller $T$, you will see that it degenerates to a Dirac delta at $0$, which agrees with the fact that $I(0)=0$ almost surely.
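For instance (hypothetical parameter choices, reusing distributionIT from above), one can watch this degeneration directly:

(* hypothetical illustration: for small T the estimated density concentrates near T,
   hence approaches a point mass at 0 as T -> 0 *)
smallTs = {0.2, 0.1, 0.05};
smallTDists = distributionIT[#, 40, 20000] & /@ smallTs;
Plot[Evaluate[PDF[#, x] & /@ smallTDists], {x, -0.05, 0.3},
 Frame -> True, PlotRange -> All,
 PlotLegends -> ("T = " <> ToString[#] & /@ smallTs)]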

Remark: Computational methods are of constant use in research to approximate probabilistic features of response processes to random differential equations. See for example [M. A. El-Tawil, The approximate solutions of some stochastic differential equations using transformations, Applied Mathematics and Computation 164 (1), 167–178, 2005], [D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, 2010], [L. Villafuerte, B. M. Chen-Charpentier, A random differential transform method: Theory and applications, Applied Mathematics Letters 25 (10), 1490–1494, 2012].

Solution 2:

Well, the first moment is relatively straightforward. If the integral exists, then

$$ \mathbb{E}\left[\int_a^b X(t)\,\mathrm{d}h(t)\right] = \int_a^b \mathbb{E}[X(t)]\,\mathrm{d}h(t). $$

See, for example, Grigoriu 3.69.
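For instance, applied to the integral in the question, $I(t)=\int_0^t \cos(B(s))\,\mathrm{d}s$, this gives the mean in closed form: since $B(s)\sim\text{Normal}(0,s)$, we have $\mathbb{E}[\cos(B(s))]=\operatorname{Re}\,\mathbb{E}\left[e^{iB(s)}\right]=e^{-s/2}$, and therefore $$ \mathbb{E}[I(t)]=\int_0^t \mathbb{E}[\cos(B(s))]\,\mathrm{d}s=\int_0^t e^{-s/2}\,\mathrm{d}s=2\left(1-e^{-t/2}\right), $$ which is approximately $0.787$ at $t=1$.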

Similarly, for the second moment (see, for example, Grigoriu 3.58),

$$ \mathbb{E}\left[\int_a^b X(t)\,\mathrm{d}t \int_a^b Z(t)\,\mathrm{d}t\right] = \int_{[a,b]^2}\mathbb{E}[X(u)Z(v)]\,\mathrm{d}u\,\mathrm{d}v. $$
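Again for $I(t)=\int_0^t \cos(B(s))\,\mathrm{d}s$ from the question, the identity $\cos x\cos y=\tfrac12\left(\cos(x-y)+\cos(x+y)\right)$, together with $B(u)-B(v)\sim\text{Normal}(0,|u-v|)$ and $B(u)+B(v)\sim\text{Normal}(0,\,u+v+2\min(u,v))$, gives $$ \mathbb{E}[\cos(B(u))\cos(B(v))]=\tfrac12\left(e^{-|u-v|/2}+e^{-(u+v+2\min(u,v))/2}\right), $$ so that $$ \mathbb{E}\left[I(t)^2\right]=\frac12\int_0^t\int_0^t\left(e^{-|u-v|/2}+e^{-(u+v+2\min(u,v))/2}\right)\mathrm{d}u\,\mathrm{d}v, $$ which can be evaluated (numerically or in closed form) and combined with the first moment above to obtain $\operatorname{Var}(I(t))$.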