Independence between a constant random variable and another random variable.

Solution 1:

$X$ and $Y$ are independent if and only if $P(X\in A, Y\in B)=P(X\in A)P(Y\in B)$ for all Borel sets $A$ and $B$.

Suppose $Y=y$ with probability $1$ for some $y\in\mathbb{R}$. Then $$P(X\in A, Y\in B)=\begin{cases}0&\text{if } y\notin B,\\ P(X\in A)&\text{if } y\in B.\end{cases}$$

But notice $P(Y\in B)=1$ if $y\in B$ and $P(Y\in B)=0$ if $y\notin B$.
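Combining the two observations gives the product formula in both cases (writing out the step the solution leaves to the reader):

$$P(X\in A, Y\in B)=\begin{cases}0=P(X\in A)\cdot 0&\text{if } y\notin B,\\ P(X\in A)=P(X\in A)\cdot 1&\text{if } y\in B,\end{cases}$$

so $P(X\in A, Y\in B)=P(X\in A)P(Y\in B)$ for all $A$ and $B$, which is exactly the independence criterion.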

Solution 2:

Hint: Work with the cumulative distribution functions. Show that for all $x$ and $y$ we have $\Pr(X\le x, Y\le y)=\Pr(X\le x)\Pr(Y\le y)$.

Note that if $Y=k$ with probability $1$, then $F_Y(y)=0$ if $y\lt k$, and $F_Y(y)=1$ if $y\ge k$.
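Carrying the hint out (one way to finish, with the same case split as in Solution 1):

$$\Pr(X\le x, Y\le y)=\begin{cases}0=\Pr(X\le x)\cdot 0&\text{if } y< k,\\ \Pr(X\le x)=\Pr(X\le x)\cdot 1&\text{if } y\ge k,\end{cases}$$

since $\{Y\le y\}$ is a null event when $y<k$ and an almost-sure event when $y\ge k$. In both cases the right-hand side equals $\Pr(X\le x)F_Y(y)$, so the joint CDF factors for all $x$ and $y$.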

Solution 3:

Here is a fun proof using (introductory) measure theory. $\newcommand{\ind}{\perp\kern-5pt\perp}$

Short version

Let $(\Omega, \mathcal{F}, P)$ be a probability space, $C : \Omega \to \Psi$ a constant random variable, $X: \Omega \to \Psi$ an arbitrary random variable, and $\sigma_X$ the $\sigma$-field generated by $X$. Note that $\sigma_C=\{\emptyset, \Omega \}$. But since $\Omega$ and $\emptyset$ are independent of every event in $\mathcal{F}$, we have $\sigma_C \ind \sigma_X$. Therefore, $C \ind X$.

Long version

(Assumes almost$^\star$ no prior knowledge of measure theory.) First, given a probability space $(\Omega, \mathcal{F}, P)$, note that $\Omega$ and $\emptyset$ are independent of every event $A \in \mathcal{F}$. To see this, recall the definition of independent events: $A \ind B$ if $P(A \cap B) = P(A)P(B)$. Now observe that $P(A \cap \Omega) = P(A) = P(A) \cdot 1 = P(A) P(\Omega)$, so $\Omega$ is independent of every event in $\mathcal{F}$. A similar argument holds for $\emptyset$.
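For completeness, the $\emptyset$ case is one line:

$$P(A \cap \emptyset) = P(\emptyset) = 0 = P(A) \cdot 0 = P(A)P(\emptyset).$$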

Next, note that if $C : \Omega \to \Psi$ is a constant random variable, then $\sigma_C$, the $\sigma$-field generated by $C$, is trivial. In other words, $\sigma_C = \{ \emptyset, \Omega \}$. To see this, recall that the $\sigma$-field generated by a random variable $C$ is defined as $\sigma_C := \{ C^{-1}(B) : B \in \mathcal{B}(\Psi) \}$, where $\mathcal{B}(\Psi)$ denotes the Borel sets of $\Psi$. Then note that if $C$ takes the constant value $c_0 \in \Psi$, then $C^{-1}(B) = \Omega$ if $c_0 \in B$, and otherwise $C^{-1}(B) = \emptyset$.
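Spelled out, the preimage computation is

$$C^{-1}(B)=\{\omega\in\Omega : C(\omega)\in B\}=\{\omega\in\Omega : c_0\in B\}=\begin{cases}\Omega&\text{if } c_0\in B,\\ \emptyset&\text{if } c_0\notin B,\end{cases}$$

since the condition $c_0\in B$ does not depend on $\omega$: it holds for every $\omega$ or for none.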

Now note that $\sigma_C$ and $\sigma_X$ must be independent $\sigma$-fields for any random variable $X$. To see this, recall that two $\sigma$-fields $\mathcal{G}$ and $\mathcal{H}$ are defined to be independent if the events $G$ and $H$ are independent for all $G \in \mathcal{G}$, $H \in \mathcal{H}$. We want to show this holds when $\mathcal{G} = \sigma_C$ and $\mathcal{H} = \sigma_X$. But we have already determined that $\sigma_C = \{ \Omega, \emptyset \}$, and that the events $\Omega$ and $\emptyset$ are independent of every event (including the events in $\sigma_X$). So we are done.

Finally, note that two random variables $X, Y$ are independent if the $\sigma$-fields generated by them are independent. In other words, $\sigma_X \ind \sigma_Y \implies X \ind Y$. To see this, recall the definition of $X \ind Y$: for all $B, B' \in \mathcal{B}(\Psi)$, the events $X^{-1}(B)$ and $Y^{-1}(B')$ are independent. But by construction, $X^{-1}(B) \in \sigma_X$ and $Y^{-1}(B') \in \sigma_Y$, and those events are independent by assumption.
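Chaining the three steps together recovers the short version:

$$\sigma_C = \{\emptyset, \Omega\} \quad\Longrightarrow\quad \sigma_C \ind \sigma_X \quad\Longrightarrow\quad C \ind X.$$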

Footnotes

$\star$: The only prerequisites are (1) the definition of a $\sigma$-field and (2) Borel sets. The former is introductory and can be looked up. For some sense of the latter, simply consider $\mathcal{B}(\mathbb{R})$, which is the smallest $\sigma$-field that contains all the intervals.