A Brownian motion $B$ that is discontinuous at an independent, uniformly distributed random variable $U \sim U\left(0,1\right)$
Solution 1:
$$P(B(U)=0)=\int_0^1P(B(u)=0)\,\mathrm du=0$$
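(The interchange of probability and integral is justified by the joint measurability of $\left(u, \omega\right) \mapsto B_u\left(\omega\right)$ together with Fubini's theorem; this is spelled out in solution 2 below. The integrand vanishes because, for $0 < u \leq 1$, $B\left(u\right) \sim N\left(0, u\right)$ has a density, so $$ P\left(B\left(u\right) = 0\right) = \int_{\left\{0\right\}} \frac{1}{\sqrt{2\pi u}}\, e^{-x^2/\left(2u\right)}\,\mathrm dx = 0 $$ and the single point $u = 0$ has Lebesgue measure zero.)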
Solution 2:
I'd like to add some details to Did's answer, for my own future reference.
PART I: SOME RELEVANT DEFINITIONS AND RESULTS FROM THE LITERATURE
Definition 1: A separable metric space ([R], Definition 1.41, p. 19) A metric space $\left(M,d\right)$ is separable iff it contains a countable, dense subset. (The empty set is regarded as separable.)
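For instance, $\mathbb{R}$ with the Euclidean metric is separable, a countable dense subset being $$ \mathbb{Q} \subseteq \mathbb{R}, \qquad \overline{\mathbb{Q}} = \mathbb{R} $$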
Definition 2: A Polish space ([F] section 18.1, definition 1, p. 347) A Polish space is a complete metric space that has a countable dense subset.
Example 3: Examples of Polish spaces ([F] section 18.1)
a) (p. 347) The real line $\mathbb{R}$ with the usual Euclidean metric is a Polish space.
b) (Example 1, pp. 348-349) $\mathbb{R}^\infty$, consisting of all sequences $\left(x_1,x_2,\dots\right)$ of real numbers, with the product topology, is a Polish space.
c) (Problem 3, pp. 349-350) Let $\mathbf{C}\left[0,1\right]$ denote the metric space of continuous $\mathbb{R}$-valued functions on $\left[0,1\right]$, with the distance between two functions $f$ and $g$ being defined as the quantity $$ \max\left(\left\{\left|f\left(t\right) - g\left(t\right)\right|: t \in \left[0,1\right]\right\}\right) $$ Then $\mathbf{C}\left[0,1\right]$ is a Polish space, and its Borel $\sigma$-field equals $$ \sigma\left(\left\{f : f\left(t\right) \in B\right\} : t \in \left[0,1\right], B \in \mathcal{B}\right) $$
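Separability here can be seen via the Weierstrass approximation theorem: the polynomials with rational coefficients, $$ \left\{q_0 + q_1 t + \cdots + q_n t^n : n \in \mathbb{N}_0,\ q_0, \dots, q_n \in \mathbb{Q}\right\} $$ form a countable dense subset of $\mathbf{C}\left[0,1\right]$.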
Proposition 4: The product of Polish spaces is Polish ([F] proposition 2, p. 349) For $j = 1, 2, \dots,$ let $\left(\Psi_j, \rho_j\right)$ be a Polish space, and let $\Psi = \prod_{j = 1}^\infty \Psi_j$, with the topology on $\Psi$ being the product topology and $\rho$ a metric inducing that topology (one standard choice is given below). Then $\left(\Psi, \rho\right)$ is a Polish space.
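[F]'s choice of the metric $\rho$ may differ, but one standard metric inducing the product topology is $$ \rho\left(x, y\right) = \sum_{j = 1}^\infty 2^{-j}\,\frac{\rho_j\left(x_j, y_j\right)}{1 + \rho_j\left(x_j, y_j\right)} $$ which is complete whenever every $\rho_j$ is, and separable whenever every $\left(\Psi_j, \rho_j\right)$ is.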
Definition 5: Isomorphic measurable spaces ([F] Definition 17, p. 418) Two measurable spaces are called isomorphic iff there exists a bijective function $\varphi$ between them such that both $\varphi$ and $\varphi^{-1}$ are measurable.
Definition 6: A Borel space ([F] definition 18, p. 418) A measurable space is called a Borel space iff it is isomorphic to some $\left(A,\mathcal{B}\left(A\right)\right)$, where $A$ is a Borel set in $\left[0,1\right]$ and $\mathcal{B}\left(A\right)$ is the $\sigma$-field of Borel subsets of $A$.
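For instance, $\left(\mathbb{R}, \mathcal{B}\left(\mathbb{R}\right)\right)$ is a Borel space: the logistic function $$ \varphi : \mathbb{R} \rightarrow \left(0,1\right), \qquad \varphi\left(x\right) = \frac{1}{1 + e^{-x}} $$ is a homeomorphism onto the Borel set $\left(0,1\right) \subseteq \left[0,1\right]$, hence measurable with a measurable inverse.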
Proposition 7: A Polish space is a Borel space as is the product of Borel spaces ([F] proposition 20, p. 419) Every Polish space is a Borel space. A product of a finite or countable number of Borel spaces is itself a Borel space.
Proposition 8: Product and Borel $\sigma$-fields ([K] lemma 1.2, p. 3) Let $S_1, S_2, \dots$ be separable metric spaces. Then $$ \mathcal{B}\left(S_1\times S_2\times \cdots\right) = \mathcal{B}\left(S_1\right)\otimes\mathcal{B}\left(S_2\right)\otimes\cdots $$
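In particular, taking every $S_j = \mathbb{R}$ (the case needed in part II below) gives $$ \mathcal{B}\left(\mathbb{R}^\infty\right) = \mathcal{B}\otimes\mathcal{B}\otimes\cdots =: \mathcal{B}_\infty $$ which is the $\sigma$-field on the sequence space $\mathbb{R}^\infty$ used throughout part II.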
Theorem 9: Existence and uniqueness of conditional distributions ([F] theorem 19, p. 418) Denote by $\left(\Psi, \mathcal{H}\right)$ a Borel space, by $\left(\Omega, \mathcal{F}, P\right)$ a probability space, and by $\mathcal{G}$ a sub-$\sigma$-field of $\mathcal{F}$. Then every $\left(\Psi, \mathcal{H}\right)$-valued random variable $X$ defined on $\left(\Omega, \mathcal{F}, P\right)$ has a conditional distribution $Z$, given $\mathcal{G}$. Moreover, such conditional distributions are unique in the sense that if $Z'$ is any other conditional distribution of $X$ given $\mathcal{G}$, then $Z = Z'$ a.s.
Remark 10: A conditional distribution is a regular conditional distribution A "conditional distribution" is a term used in [F] (defined in section 21.2, definition 9, p. 413) to refer to the same concept as the one denoted by the term "regular conditional distribution" by other authors, such as [K] (see pp. 106-107).
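Spelled out (following [K], pp. 106-107), a conditional distribution of $X$ given $\mathcal{G}$ is a map $Z : \Omega \times \mathcal{H} \rightarrow \left[0,1\right]$ such that $Z\left(\omega, \cdot\right)$ is a probability measure on $\left(\Psi, \mathcal{H}\right)$ for every $\omega$, $Z\left(\cdot, H\right)$ is $\mathcal{G}$-measurable for every $H$, and $$ Z\left(\cdot, H\right) = P\left(X \in H \mid \mathcal{G}\right) \quad \text{a.s., for every } H \in \mathcal{H} $$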
Theorem 11: A canonical model for a sequence of independent random variables ([W] theorem 8.7, p. 81) Let $\left(\Lambda_n : n\in\mathbb{N}_1\right)$ be a sequence of probability measures on $\left(\mathbb{R}, \mathcal{B}\right)$. Define $$ \Omega = \prod_{n \in \mathbb{N}_1}\mathbb{R} $$ so that a typical element $\omega$ of $\Omega$ is a sequence $\left(\omega_n\right)$ in $\mathbb{R}$. Define $$ X_n : \Omega \rightarrow \mathbb{R},\ X_n\left(\omega\right) := \omega_n $$ and let $\mathcal{F} := \sigma\left(X_n : n\in\mathbb{N}_1\right)$. Then there exists a unique probability measure $P$ on $\left(\Omega, \mathcal{F}\right)$, such that for $r\in \mathbb{N}_1$ and $B_1, B_2, \dots, B_r \in \mathcal{B}$, $$ P\left(\left(\prod_{1 \leq k \leq r}B_k\right) \times \prod_{k > r}\mathbb{R}\right) = \prod_{1 \leq k \leq r}\Lambda_k\left(B_k\right) $$ We write $$ \left(\Omega, \mathcal{F}, P\right) = \prod_{n \in \mathbb{N}_1}\left(\mathbb{R}, \mathcal{B}, \Lambda_n\right) $$ Then the sequence $\left(X_n : n \in \mathbb{N}_1\right)$ is a sequence of independent random variables on $\left(\Omega, \mathcal{F}, P\right)$, $X_n$ having law $\Lambda_n$.
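In part II, this theorem is applied with one uniformly distributed coordinate adjoined to countably many standard Gaussian ones; relabeling the coordinates so that the indexing starts from $0$, the resulting canonical model is $$ \left(\Omega, \mathcal{F}, P\right) = \left(\mathbb{R}, \mathcal{B}, U\left(\left[0,1\right]\right)\right) \times \prod_{n \in \mathbb{N}_1}\left(\mathbb{R}, \mathcal{B}, N\left(0,1\right)\right) $$ with $Z_0 \sim U\left(\left[0,1\right]\right)$ and the i.i.d. standard normals $\left(Z_n\right)_{n \in \mathbb{N}_1}$ realized as the coordinate projections.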
Proposition 12: A Brownian motion $\left(B_t\left(\omega\right)\right)_t$ is measurable w.r.t. $\left(t,\omega\right)$ ([M] Remark 1.5, p. 12) If Brownian motion is constructed as a family $\left\{B_t : t \geq 0\right\}$ of random variables on some probability space $\Omega$, then the mapping $\left(t, \omega\right) \mapsto B\left(t, \omega\right)$ is measurable on the product space $\left[0,\infty\right)\times\Omega$.
Definition 13: A Gaussian stochastic process ([M] Remark 1.6, p. 12) A stochastic process $\left(Y_t\right)_{t\geq 0}$ is called a Gaussian process if for all $t_1 < t_2 < \cdots < t_n$ the vector $\left(Y\left(t_1\right), \dots, Y\left(t_n\right)\right)$ is a Gaussian random vector. In particular, for all $t$, $Y_t$ is a normal random variable.
Proposition 14: The finite-dimensional distributions of a Brownian motion are Gaussian ([M] Remark 1.6, p. 12) Brownian motion with start at $x \in \mathbb{R}$ is a Gaussian process.
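Concretely, for a Brownian motion started at $x$, the finite-dimensional distributions are the Gaussian laws with $$ E\left(B\left(t\right)\right) = x, \qquad \mathrm{Cov}\left(B\left(s\right), B\left(t\right)\right) = \min\left(s, t\right) $$ so that in particular $B_u \sim N\left(x, u\right)$; with $x = 0$ and $u > 0$ this is the continuous distribution used in the final step of the proof below.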
PART II: A PROOF OF THE ORIGINAL QUESTION POSTED BY OP (=ME)
According to Paul Lévy's construction of Brownian motion described in [M] (theorem 1.3, pp. 9-12), a Brownian motion can be defined on any probability space $\left(\Omega, \mathcal{A}, P\right)$ that carries a matrix $\left(Z_{i,j}\right)_{i,j \in \mathbb{N}_1}$ of i.i.d. standard normal random variables (the first matrix row is used to construct the Brownian motion on the interval $\left[0, 1\right)$, the second row on the interval $\left[1, 2\right)$, etc.). We shall enumerate $\left(Z_{i,j}\right)_{i,j \in \mathbb{N}_1}$ with a single index: $\left(Z_n\right)_{n \in \mathbb{N}_1}$. In view of theorem 11 above, we can take $\Omega$ to be $\mathbb{R}^\infty$ with $\mathcal{A}$ being the product $\sigma$-algebra, and we can take $Z_n$ ($n \in \mathbb{N}_1$) to be the projection on the $n$th coordinate. Additionally, we may assume w.l.o.g. that $Z_0$ is the projection on an additional, zero-th coordinate, such that $Z_0 \sim U\left(\left[0,1\right]\right)$ and $\left(Z_n\right)_{n \in \mathbb{N}_0}$ are independent. Define $U := Z_0$ and $I := \left(Z_n\right)_{n \in \mathbb{N}_1}$.
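For intuition, here is a minimal numerical sketch of the dyadic midpoint refinement at the heart of Lévy's construction (the function name `levy_bm` and all implementation details are my own, not from [M], and only finitely many refinement levels are computed, whereas the construction uses the full countable family $\left(Z_n\right)_{n \in \mathbb{N}_1}$):

```python
import numpy as np

def levy_bm(levels, rng=None):
    """Sketch of Levy's midpoint construction of Brownian motion on [0, 1].

    Returns the values of B at the dyadic points k / 2**levels, k = 0, ..., 2**levels.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_pts = 2**levels + 1
    B = np.zeros(n_pts)
    B[-1] = rng.standard_normal()  # B(1) ~ N(0, 1)
    step = n_pts - 1               # index distance between already-fixed points
    for n in range(1, levels + 1):
        half = step // 2
        mids = np.arange(half, n_pts - 1, step)  # indices of the new dyadic midpoints
        # New value = average of the two neighbours plus an independent
        # Gaussian refinement with variance 2**-(n + 1).
        B[mids] = 0.5 * (B[mids - half] + B[mids + half]) \
            + 2.0**(-(n + 1) / 2) * rng.standard_normal(mids.size)
        step = half
    return B

# Usage: approximate B(U) by evaluating at the dyadic grid point nearest to U.
rng = np.random.default_rng(0)
levels = 12
B = levy_bm(levels, rng)
U = rng.uniform()
print(B[round(U * 2**levels)])
```

Evaluating at the nearest grid point only approximates $B\left(U\right)$, of course, but it illustrates how the single countable family $\left(Z_n\right)$ determines the whole path and hence $B\left(U\right)$.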
Based on the first eight items listed in part I, $I$ and $\left(U,I\right)$ are random objects that take values in Borel spaces (namely $\left(\mathbb{R}^\infty, \mathcal{B}_\infty\right)$ and $\left(\mathbb{R}\times\mathbb{R}^\infty, \mathcal{B}\otimes\mathcal{B}_\infty\right)$, respectively). Hence, by theorem 9, there exist Markov kernels $K = P\left(I \in \cdot\mid U\right)$ and $Q = P\left(\left(U, I\right) \in \cdot\mid U\right)$. By an obvious extension of the standard substitution ("freezing") theorem for conditional probabilities, $U$ can be treated as a constant on the left-hand side of the conditioning bar in $P\left(\left(U, I\right) \in \cdot\mid U\right)$, so that, by the law of total probability, for every $A \in \mathcal{B}\otimes\mathcal{B}_\infty$, $$ P\left(\left(U, I\right)\in A\right) = \int P\left(\left(U, I\right)\in A \mid U = u\right)\,P_U\left(\mathrm du\right) = \int P\left(I \in A_u \mid U = u\right)\,P_U\left(\mathrm du\right) $$ where $A_u := \left\{x \in \mathbb{R}^\infty : \left(u, x\right) \in A\right\}$ denotes the $u$-section of $A$.
In particular, taking $A := \left\{\left(u, x\right) : B_u\left(x\right) = 0\right\}$, which is product-measurable by proposition 12 and satisfies $A_u = \left\{B_u = 0\right\}$, $$ P\left(B\left(U\right) = 0\right) = P\left(\left(U,I\right) \in A\right) = \int P\left(I \in \left\{B_u = 0\right\} \mid U = u\right)\,P_U\left(\mathrm du\right) $$
But since $U$ and $I$ are independent, $P\left(I \in \cdot \mid U\right) = P\left(I \in \cdot\right) = P_I\left(\cdot\right)$ a.s. Moreover, by proposition 14, $B_u \sim N\left(0, u\right)$, so $P\left(B_u = 0\right) = 0$ for every $u > 0$, and $P_U$-almost every $u \in \left[0,1\right]$ is positive. Hence $$ \int P\left(I \in \left\{B_u = 0\right\} \mid U = u\right)\,P_U\left(\mathrm du\right) = \int P\left(B_u = 0\right)\ P_U\left(\mathrm du\right) = \int 0\ \mathrm dP_U = 0 $$
Q.E.D.
PART III: WORKS CITED
- [F] Fristedt, Bert and Gray, Lawrence. A Modern Approach to Probability Theory. Springer, 1997.
- [K] Kallenberg, Olav. Foundations of Modern Probability. 2nd edition. Springer, 2001.
- [M] Mörters, Peter and Peres, Yuval. Brownian Motion. Version downloaded from Yuval Peres's website. Accessed 2014-02-18.
- [R] Rynne, Bryan P. and Youngson, Martin A. Linear Functional Analysis. 2nd edition. Springer, 2008.
- [W] Williams, David. Probability with Martingales. Cambridge University Press, 1991.