Random sample as random variables?

Solution 1:

It would be the latter.

If you want a sequence of independent random variables $X_1, X_2, \cdots, X_n$ with the same distribution $\mathcal{D}$, and you know that there is some probability space $(\Omega, \mathcal{F}, P)$ and a random variable $X:\Omega \to \mathbb{R}$ with distribution $\mathcal{D}$, the standard trick for constructing an appropriate probability space for the $X_i$'s is to define a new probability space $\Omega'=\Omega^n$ (with the product $\sigma$-algebra) carrying the product measure $P' = P^{\otimes n}$, and to set $X_i(\omega_1, \omega_2, \cdots, \omega_n) = X(\omega_i)$. It is then not hard to check that for any collection $\{j_1, \cdots, j_k\} \subseteq [n]$ and any Borel sets $S_1, S_2, \cdots, S_k \in \mathcal{B}(\mathbb{R})$, $$P'\left( \bigcap_{i=1}^{k} \{X_{j_i} \in S_{i}\}\right) = \prod_{i=1}^k P'(X_{j_i} \in S_{i}).$$
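To make the construction concrete, here is a minimal Python sketch on a finite sample space; the base distribution (a biased coin), the names `Omega`, `P`, and the choice $n = 3$ are illustrative assumptions, not part of the original argument. It builds the product space and product measure explicitly and checks the factorization above exactly for one pair of sets (indexing coordinates from 0 rather than 1).

```python
from itertools import product
from math import prod

# Base probability space (Omega, P): a biased coin (assumed for illustration).
Omega = ["H", "T"]
P = {"H": 0.7, "T": 0.3}

n = 3  # number of i.i.d. copies

# Product space Omega' = Omega^n with the product measure P' = P^{(x) n}.
Omega_prod = list(product(Omega, repeat=n))
P_prod = {w: prod(P[coord] for coord in w) for w in Omega_prod}

# A random variable X with distribution D: here the indicator of heads.
def X(omega):
    return 1 if omega == "H" else 0

# Coordinate random variables X_i(w_1, ..., w_n) = X(w_i), zero-indexed.
def X_i(i, w):
    return X(w[i])

# Check P'(X_0 in S0, X_1 in S1) = P'(X_0 in S0) * P'(X_1 in S1)
# for one choice of sets (restricted to the range {0, 1} of X).
S0, S1 = {1}, {0}
joint = sum(P_prod[w] for w in Omega_prod
            if X_i(0, w) in S0 and X_i(1, w) in S1)
marg0 = sum(P_prod[w] for w in Omega_prod if X_i(0, w) in S0)
marg1 = sum(P_prod[w] for w in Omega_prod if X_i(1, w) in S1)
assert abs(joint - marg0 * marg1) < 1e-12
print(joint, marg0 * marg1)  # both 0.7 * 0.3 = 0.21
```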

This idea of extending a probability space as needed, to account for potentially new sources of randomness, is fundamental to probability theory and is often glossed over. The crucial point is that the extension doesn't wreck anything. For instance, if I only look at the first coordinate $i=1$ of the product space and define a random variable depending only on that coordinate, i.e., one that factors through the first-coordinate projection $\pi_1$, then its distribution is identical to that of the corresponding random variable on the original (non-product) space $\Omega$, since in particular $P'(\pi_1^{-1}(E)) = P(E)$ for every $E \in \mathcal{F}$. This is akin to the observation that if I roll $n$ dice independently, the probability distribution of the first roll is no different than if I had rolled only a single die.
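The dice analogy can be verified exactly on a finite space. The following sketch (again with illustrative names, assuming a fair six-sided die and $n = 2$ rolls) computes $P'(\pi_1^{-1}(E))$ on the product space and confirms it equals $P(E)$ on the original single-die space.

```python
from itertools import product
from math import prod

# Original space: a fair six-sided die (assumed for illustration).
Omega = [1, 2, 3, 4, 5, 6]
P = {w: 1 / 6 for w in Omega}

# Product space for n = 2 independent rolls.
n = 2
Omega_prod = list(product(Omega, repeat=n))
P_prod = {w: prod(P[c] for c in w) for w in Omega_prod}

# First-coordinate projection pi_1 and an event E on the original space.
def pi_1(w):
    return w[0]

E = {2, 4, 6}  # "first roll is even"

# P'(pi_1^{-1}(E)), computed on the product space ...
lhs = sum(P_prod[w] for w in Omega_prod if pi_1(w) in E)
# ... agrees with P(E) on the original single-die space.
rhs = sum(P[w] for w in E)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)  # both 0.5
```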

A blog post by Terry Tao goes over this in great detail.