Random variables defined on the same probability space with different distributions
Consider the real-valued random variable $X$ and suppose it is defined on the probability space $(\Omega, \mathcal{A}, \mathbb{P})$. Assume that $X \sim N(\mu, \sigma^2)$. This means that $$ (1)\text{ } \mathbb{P}(X\in [a,b])=\mathbb{P}(\{\omega \in \Omega \text{ s.t. } X(\omega)\in [a,b]\})=\frac{1}{\sigma\sqrt{2\pi}}\int_{a}^{b}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\mathrm dx $$ In several books I have found that we can also say that $X$ is distributed according to $\mathbb{P}$.
Now suppose that we add another random variable $Y$ on the same probability space and assume $Y \sim U([0,1])$. This means that, for $0\leq a\leq b \leq 1$, $$ (2)\text{ } \mathbb{P}(Y \in [a,b])=\mathbb{P}(\{\omega \in \Omega \text{ s.t. } Y(\omega)\in [a,b]\})=b-a $$
Question: is the fact that $X$ and $Y$ are defined on the same probability space but have different probability distributions a contradiction? What is the relation between $\mathbb{P}$, the normal cdf, and the uniform cdf? Can we say that both $X$ and $Y$ are distributed according to $\mathbb{P}$ even though they have different distributions?
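For concreteness, here is how I check (1) and (2) numerically in Python (the values of $\mu$, $\sigma$, $a$, $b$ below are just examples):

```python
# Numerical check of (1) and (2); mu, sigma, a, b are just example values.
import numpy as np
from scipy import integrate, stats

mu, sigma = 1.0, 2.0
a, b = -0.5, 3.0

# (1): P(X in [a,b]) as the integral of the N(mu, sigma^2) density over [a,b]
density = lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
lhs, _ = integrate.quad(density, a, b)
rhs = stats.norm.cdf(b, loc=mu, scale=sigma) - stats.norm.cdf(a, loc=mu, scale=sigma)
print(lhs, rhs)  # both ~0.6147

# (2): P(Y in [a,b]) = b - a for 0 <= a <= b <= 1
a2, b2 = 0.2, 0.7
print(stats.uniform.cdf(b2) - stats.uniform.cdf(a2), b2 - a2)  # both 0.5
```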
Solution 1:
Admittedly, a holistic answer to your questions would require more measure-theoretic machinery than what follows. However, I will attempt to give you succinct responses that you might find helpful.
So, let the real-valued random variables $X, Y$ be defined on the same probability space $(\Omega, \Sigma, \mathbb P)$.
1) $X$ and $Y$ are measurable so that, for instance, for the interval of real numbers $[a,b]$, we necessarily have $\left\{X\in[a,b]\right\}, \left\{Y\in[a,b]\right\} \in \Sigma$, while we need not have $$\{X\in[a,b]\} = \{Y\in[a,b]\}.$$
2) Because of 1) above, we need not have $$\mathbb P\left\{X\in[a,b]\right\} = \mathbb P\left\{Y\in[a,b]\right\}.$$
3) Note that, because we may define the probability measure $\mathbb P_X(B):=\mathbb P\{X \in B\}$ over Borel sets $B \in \mathcal B(\mathbb R)$, we can speak of $X$ being distributed according to $\mathbb P_X$. In so doing, we are thinking of $X$ in terms of the probability space $(\mathbb R, \mathcal B(\mathbb R), \mathbb P_X)$, not the probability space $(\Omega, \Sigma, \mathbb P)$. In your example, since $X\sim N(\mu, \sigma^2)$, we have an integral representation of $\mathbb P_X$ with respect to Lebesgue measure, so that $$ \mathbb P_X([a,b])=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}{\bf{1}}_{[a,b]}(x)e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\mathrm dx\,. $$
A similar development holds for the uniform random variable $Y$.
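Here is a minimal numerical sketch of 3), with illustrative parameter values: $\mathbb P_X([a,b])$ is computed by integrating the indicator ${\bf 1}_{[a,b]}$ against the normal density, and $\mathbb P_Y([a,b])$ against the uniform density.

```python
# Sketch of point 3): the pushforward measures P_X and P_Y evaluated on [a, b]
# by integrating (indicator of [a,b]) * density; parameters are illustrative.
import numpy as np
from scipy import integrate

mu, sigma = 0.0, 1.0
a, b = 0.25, 0.75

indicator = lambda x: 1.0 if a <= x <= b else 0.0
normal_density = lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
uniform_density = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0

# Integrate over a range wide enough that the normal density is negligible outside.
P_X, _ = integrate.quad(lambda x: indicator(x) * normal_density(x), -10, 10, points=[a, b])
P_Y, _ = integrate.quad(lambda x: indicator(x) * uniform_density(x), -10, 10, points=[0, a, b, 1])
print(P_X)  # ~0.1747 = Phi(0.75) - Phi(0.25)
print(P_Y)  # 0.5 = b - a
```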
4) All of the foregoing is just one way of proceeding; there are alternatives. For instance, one may define $X, Y$ over the same measurable space $(\Omega, \Sigma)$, but different probability spaces, $(\Omega, \Sigma, \mathbb P_X)$ and $(\Omega, \Sigma, \mathbb P_Y)$, with different probability measures.
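A tiny sketch of this alternative, with frozen scipy distributions standing in for the two measures (an assumption of the illustration, not the only way to formalize it):

```python
# Sketch of point 4): one measurable space (R, B(R)), two probability measures.
# Frozen scipy distributions stand in for the measures P_X and P_Y.
from scipy import stats

P_X = stats.norm(loc=0.0, scale=1.0)     # under this measure the identity map is N(0,1)
P_Y = stats.uniform(loc=0.0, scale=1.0)  # under this one it is U([0,1])

a, b = 0.25, 0.75
print(P_X.cdf(b) - P_X.cdf(a))  # ~0.1747
print(P_Y.cdf(b) - P_Y.cdf(a))  # 0.5
```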
Solution 2:
First of all, we have two spaces: the probability space $(\Omega,\mathcal A,\mathbb P)$ and the measurable space $(\mathbb R,\mathcal B(\mathbb R))$, where $\mathcal B(\mathbb R)$ is the Borel $\sigma$-algebra of $\mathbb R$, i.e. the smallest $\sigma$-algebra that contains all open sets.
A random variable $X$ is a measurable function that maps $\Omega$ to $\mathbb R$. Measurable means that $X^{-1}(B)=\{\omega\in\Omega:X(\omega)\in B\}\in\mathcal A$ for each $B\in\mathcal B(\mathbb R)$. Roughly speaking, randomness takes place in the probability space: we can calculate the probability of an event $A\in\mathcal A$, and it is given by $\mathbb P(A)$. However, we are interested in events $B\in\mathcal B(\mathbb R)$, and measurability enables us to evaluate their probabilities. We have $\mathbb P(X\in B)=\mathbb P\{\omega\in\Omega:X(\omega)\in B\}$, and the random variable $X$ appears in this expression. If we take another random variable $Y$, the probability of the event $B$ is then given by $\mathbb P(Y\in B)=\mathbb P\{\omega\in\Omega:Y(\omega)\in B\}$, and these probabilities might be different. In general, they depend on two objects: the random variable and the probability measure $\mathbb P$.
The distribution of the random variable $X$ is the probability measure $\mathbb P_X$ defined on $\mathcal B(\mathbb R)$ by setting $\mathbb P_X(B)=\mathbb P\{\omega\in\Omega:X(\omega)\in B\}$, and this distribution might be the uniform distribution, the normal distribution, or any other probability measure on $\mathcal B(\mathbb R)$. So the fact that $X$ and $Y$ are defined on the same probability space but have different probability distributions is not a contradiction. The distribution depends on the random variable, so if we take another random variable defined on the same probability space, we may obtain a different distribution.
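A simulation sketch of this point; the concrete choices $\Omega=(0,1)$, $\mathbb P=$ Lebesgue measure and the two maps below are assumptions made for illustration. The same sample points $\omega$ are fed to two different measurable functions, and the resulting distributions differ:

```python
# Two random variables on the SAME probability space ((0,1), B((0,1)), Leb),
# evaluated on the same omega draws, with different distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=100_000)  # sample points of (Omega, P)

X = stats.norm.ppf(omega)  # X(omega) = Phi^{-1}(omega) is standard normal
Y = omega                  # Y(omega) = omega is uniform on (0,1)

B = (0.2, 0.8)  # one Borel set B
print(np.mean((B[0] <= X) & (X <= B[1])))  # ~0.209 = Phi(0.8) - Phi(0.2)
print(np.mean((B[0] <= Y) & (Y <= B[1])))  # ~0.6   = 0.8 - 0.2
```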
I hope this helps.
Solution 3:
TL;DR I think the source of your confusion is seeing $X$ and $Y$ as being both the identity random variable in $(\Omega, \mathscr{F},\mathbb P)$.
I'm going to give an example of explicit exponential and uniform distributions on the same probability space $(\Omega, \mathscr{F},\mathbb P)$.
Consider a random variable $X$ in $((0,1), \mathcal{B}(0,1), Leb)$ given by
$$X(\omega):=\frac{1}{\lambda} \ln \frac{1}{1-\omega}, \lambda > 0$$
It has cdf $F_X(x) = P(X \le x) = (1-e^{-\lambda x})\mathbf 1_{(0,\infty)}(x)$, which we know to be the cdf of an exponentially distributed random variable. (*)
Actually, the random variable $\omega \mapsto X(1-\omega)$,
$$X(1-\omega)=\frac{1}{\lambda} \ln \frac{1}{\omega}, \qquad \lambda > 0,$$
also has cdf $(1-e^{-\lambda x})\mathbf 1_{(0,\infty)}(x)$.
Are all cdfs in this mysterious probability space exponential? No!
Now consider the identity random variable $U$ in $((0,1), \mathcal{B}(0,1), Leb)$:
$$U(\omega):=\omega$$
It has cdf $F_U(u) = P(U \le u) = u\,\mathbf 1_{(0,1)}(u)+\mathbf 1_{[1,\infty)}(u)$, which we know to be the cdf of a uniformly distributed random variable.
The above $X$ and $U$ have different CDFs under the same probability space. The aforementioned explicit representations of the exponential and uniform distributions in this probability space are called Skorokhod representations in $((0,1), \mathcal{B}(0,1), Leb)$.
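Here is a quick simulation of the two representations, evaluated on the same draws from $((0,1), \mathcal{B}(0,1), Leb)$; the value $\lambda = 1.5$ and the Kolmogorov-Smirnov check are choices of this sketch, not part of the construction:

```python
# The exponential and uniform Skorokhod representations above, fed the
# same omega draws from ((0,1), B((0,1)), Leb); lambda = 1.5 is arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lam = 1.5
omega = rng.uniform(0.0, 1.0, size=100_000)

X = np.log(1.0 / (1.0 - omega)) / lam  # should be Exp(lam), see (*)
U = omega                              # the identity map, uniform on (0,1)

# Kolmogorov-Smirnov distances to the claimed cdfs: both should be tiny.
print(stats.kstest(X, stats.expon(scale=1.0 / lam).cdf).statistic)
print(stats.kstest(U, stats.uniform.cdf).statistic)
```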
Now consider the identity random variable $U$ in $(\mathbb R, \mathscr B(\mathbb R), \mu)$, where $\mu$ is the probability measure with cdf $G(u) = (1-e^{-\lambda u})\mathbf 1_{(0,\infty)}(u)$.
No surprise that $U$ is exponential by definition: $F_U(u) = P(U \le u) = G(u) = (1-e^{-\lambda u})\mathbf 1_{(0,\infty)}(u)$.
Now you're wondering: Aha! So every random variable here is exponential, right? Well, no: any distribution you can think of, say, uniform, Bernoulli, etc., has a place here. On $((0,1), \mathcal{B}(0,1), Leb)$ the Skorokhod representation of a cdf $F$ is the quantile transform
$$Y(\omega) = \sup\{y \in \mathbb{R}: F(y) < \omega\},$$
which works because the sample point $\omega$ is itself "uniform" there. On the present space the sample point is "exponential", so we first make it uniform by applying the cdf $G$ of the underlying measure:
$$Y(\omega) = \sup\{y \in \mathbb{R}: F(y) < G(\omega)\}.$$
Try for yourself to see that, for the uniform cdf $F(y) = y\,\mathbf 1_{(0,1)}(y)+\mathbf 1_{[1,\infty)}(y)$,
$$Y(\omega) = \sup\{y \in \mathbb{R}: y\,\mathbf 1_{(0,1)}(y)+\mathbf 1_{[1,\infty)}(y) < G(\omega)\} = G(\omega) \quad \text{for } \mu\text{-a.e. } \omega$$
has uniform distribution in $(\mathbb R, \mathscr B(\mathbb R), \mu)$, i.e. $$P(Y \le y) = y\,\mathbf 1_{(0,1)}(y)+\mathbf 1_{[1,\infty)}(y).$$
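A simulation check of this claim; drawing sample points according to the exponential measure is how the sketch realizes "a random $\omega$ of this space", and $\lambda = 2$ is arbitrary:

```python
# On the 'exponential' space, Y(omega) = G(omega) = 1 - exp(-lam*omega)
# should be uniform on (0,1); lam is an arbitrary choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam = 2.0
omega = rng.exponential(scale=1.0 / lam, size=100_000)  # sample points drawn according to mu

Y = 1.0 - np.exp(-lam * omega)  # Y = G(omega)
print(stats.kstest(Y, stats.uniform.cdf).statistic)  # tiny: Y is uniform
```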
Also try to see for yourself that $X(\omega)$ above no longer has exponential distribution in this probability space. (**)
Conclusion: I think the source of your confusion is seeing $X$ and $Y$ as both being the identity random variable in $(\Omega, \mathscr{F},\mathbb P)$. If you were to see them written out explicitly, you would know that they need not have the same distribution.
What $\mathbb P$ does is tell you the probabilities of the $\omega$'s. So you know how likely the sample point $0.5 \in \Omega = (0,1)$ is, but not directly how likely $X$ is to equal a number in its range, such as the real number $X(0.5) = \frac{1}{\lambda} \ln \frac{1}{1-0.5}$. (*) Of course, the probability that $X$ equals the real number $X(0.5)$ is

- dependent on the probability of the sample point $0.5$, because $0.5=1-e^{-\lambda X(0.5)}$;

- not expected to be the same in another probability space (assuming, of course, that $X$ is defined on the new probability space), because it then depends on the probability of the sample point(s) $X^{-1}(\{X(0.5)\})$, a.k.a. the event $\{X \in \{X(0.5)\}\}$. (**)
Pf of (*):

Two steps in computing $P(X \le x)$:

- Find all $\omega \in \Omega = (0,1)$ s.t. $X(\omega) \le x$.

- Compute the probability of all those $\omega$'s.
For $x \le 0$: since $X(\omega) > 0$ for every $\omega \in (0,1)$, the event $\{\omega \in \Omega : X(\omega) \le x\}$ is empty, so $P(X\leq x) = P(\emptyset) = 0$.
For $x > 0$:

Step 1:
$$X(\omega) \le x \iff \frac1{\lambda}\ln\left(\frac{1}{1-\omega}\right) \le x$$
$$ \iff \omega \le \frac{e^{\lambda x} - 1}{e^{\lambda x}}$$
$$ \iff \omega \in (0,1) \cap \left(-\infty,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\right]$$
$$ \iff \omega \in \left(0,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\right]$$
Step 2:
$$Leb\left(\left\{\omega : \omega \in \left(0,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\right]\right\}\right) = Leb\left(\left(0,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\right]\right) = \frac{e^{\lambda x} - 1}{e^{\lambda x}} = 1 - e^{-\lambda x}$$
QED
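A Monte Carlo check of (*); $\lambda$ and the test points are arbitrary choices:

```python
# Monte Carlo check of (*): Leb{omega in (0,1) : X(omega) <= x} = 1 - exp(-lam*x).
import numpy as np

rng = np.random.default_rng(3)
lam = 1.0
omega = rng.uniform(0.0, 1.0, size=1_000_000)
X = np.log(1.0 / (1.0 - omega)) / lam

for x in [0.5, 1.0, 2.0]:
    print(x, np.mean(X <= x), 1.0 - np.exp(-lam * x))  # empirical vs claimed cdf
```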
Pf of (**):
Actually, $X$ is not a random variable on $(\mathbb R, \mathscr B(\mathbb R), \mu)$ at all: the formula for $X(\omega)$ requires $\frac{1}{1-\omega} > 0$, i.e. $\omega < 1$, so $X$ is not defined on all of $\mathbb R$.
QED
The same holds for $X(1-\omega)$, where we need $\frac{1}{\omega} > 0$, i.e. $\omega > 0$.
But we can further try to show that $X$ is not exponential in $((-\infty,1), \mathscr B((-\infty,1)), \mu)$.
Pf:
For $x \le 0$: $X(\omega) \le x \iff \omega \le 1 - e^{-\lambda x} \le 0$, and this event has $\mu$-measure $G(1-e^{-\lambda x}) = 0$, so $P(X\leq x) = 0$.
For $x > 0$:

Step 1:
$$X(\omega) \le x \iff \frac1{\lambda}\ln\left(\frac{1}{1-\omega}\right) \le x$$
$$ \iff \omega \le \frac{e^{\lambda x} - 1}{e^{\lambda x}}$$
$$ \iff \omega \in (-\infty,1) \cap \left(-\infty,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\right]$$
$$ \iff \omega \in \left(-\infty,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\right],$$
since $\frac{e^{\lambda x} - 1}{e^{\lambda x}} = 1-e^{-\lambda x} < 1$.
Step 2:
$$P(\{\omega : X(\omega) \le x\}) = \mu\left(\left(-\infty,\frac{e^{\lambda x} - 1}{e^{\lambda x}}\right]\right) = G\left(1-e^{-\lambda x}\right) = 1-e^{-\lambda\left(1-e^{-\lambda x}\right)}$$
Doesn't look exponential to me.
QED
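Tabulating the cdf just computed against the exponential cdf (again with arbitrary $\lambda$ and test points) confirms that they disagree:

```python
# The cdf from the proof of (**) versus the exponential cdf: they disagree.
import numpy as np

lam = 1.0
for x in [0.5, 1.0, 3.0]:
    via_proof = 1.0 - np.exp(-lam * (1.0 - np.exp(-lam * x)))
    exponential = 1.0 - np.exp(-lam * x)
    print(x, via_proof, exponential)
# e.g. at x = 3.0: ~0.6134 vs ~0.9502
```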