Components of an n-dimensional random variable from a uniform distribution are independent
Solution 1:
Suppose $(X_1, \dots , X_n)$ is uniformly distributed on $M_1\times\dots\times M_n \subset \mathbb{R}^n$, where each $M_i$ is a measurable subset of $\mathbb{R}$ with finite positive measure. Then $P(X_k \in M_k)=1$ for every $k \leq n$, and for every product set $K_1\times\dots\times K_n \subset M_1\times\dots\times M_n$ with each $K_i$ a measurable subset of $M_i$, we have $$P((X_1, \dots , X_n) \in K_1\times\dots\times K_n) = \frac{\mu(K_1\times\dots\times K_n)}{\mu(M_1\times\dots\times M_n)}=\frac{\prod_{i=1}^n \mu(K_i)}{\prod_{i=1}^n \mu(M_i)}=\prod_{i=1}^n\frac{\mu(K_i)}{\mu(M_i)}=\prod_{i=1}^nP(X_i \in K_i)$$
where $\mu$ stands for Lebesgue measure; the last equality holds because taking $K_j = M_j$ for every $j \neq i$ shows that each marginal satisfies $P(X_i \in K_i) = \mu(K_i)/\mu(M_i)$. Since measurable rectangles form a $\pi$-system generating the product $\sigma$-algebra, this factorization lets us conclude that $X_1, \dots , X_n$ are independent.
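The factorization above can be checked numerically. The sketch below uses a hypothetical choice of sets, $M_1 = [0,1]\cup[2,3]$, $M_2 = [0,2]$, $K_1 = K_2 = [0,1]$ (not from the original argument, just an illustration), and draws uniform samples from $M_1\times M_2$ by rejection from a bounding box; both the joint probability and the product of marginal probabilities should approach $\frac{\mu(K_1)\mu(K_2)}{\mu(M_1)\mu(M_2)} = \frac14$.

```python
import random

random.seed(0)

# Hypothetical sets for illustration: M1 = [0,1] U [2,3], M2 = [0,2],
# with sub-rectangles K1 = [0,1] and K2 = [0,1].
def in_M1(x):
    return 0 <= x <= 1 or 2 <= x <= 3

N = 200_000
joint = k1_hits = k2_hits = total = 0
while total < N:
    # Rejection-sample a uniform point of M1 x M2 from the box [0,3] x [0,2].
    x, y = random.uniform(0, 3), random.uniform(0, 2)
    if not in_M1(x):
        continue
    total += 1
    in_K1, in_K2 = x <= 1, y <= 1
    joint += in_K1 and in_K2
    k1_hits += in_K1
    k2_hits += in_K2

p_joint = joint / N                        # estimate of P(X1 in K1, X2 in K2)
p_product = (k1_hits / N) * (k2_hits / N)  # estimate of P(X1 in K1) * P(X2 in K2)
# Both estimates should be close to mu(K1)mu(K2) / (mu(M1)mu(M2)) = 1/4.
print(p_joint, p_product)
```

Note that rejection sampling is used only to produce a uniform point of the product set without presupposing the independence being demonstrated.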
However, for an arbitrary measurable $A \subset \mathbb{R}^n$ with $0<\mu(A)<+\infty$, the components of a random variable uniformly distributed on $A$ need not be independent. For example, if $n = 2$ and $A = \{(x, y)\in [0;2]^2 \mid x + y \geq 2\}$, then $P((X_1, X_2)\in[0;1]^2) = 0$, even though $P(X_1\in[0;1])$ and $P(X_2 \in [0;1])$ are both non-zero (each equals $\frac14$).
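The counterexample can also be checked by simulation, again via rejection sampling: the joint event $\{X_1 \leq 1, X_2 \leq 1\}$ forces $x + y = 2$ exactly, a null set, so its empirical frequency is essentially zero, while each marginal event has probability about $\frac14$.

```python
import random

random.seed(1)

# Rejection-sample N uniform points of A = {(x, y) in [0,2]^2 : x + y >= 2}.
N = 200_000
samples = []
while len(samples) < N:
    x, y = random.uniform(0, 2), random.uniform(0, 2)
    if x + y >= 2:
        samples.append((x, y))

p_joint = sum(1 for x, y in samples if x <= 1 and y <= 1) / N  # P(X1<=1, X2<=1)
p_x = sum(1 for x, _ in samples if x <= 1) / N                 # P(X1 <= 1)
p_y = sum(1 for _, y in samples if y <= 1) / N                 # P(X2 <= 1)
# p_joint is (numerically) 0, while p_x and p_y are each about 1/4,
# so p_joint differs from p_x * p_y and the components are dependent.
print(p_joint, p_x, p_y)
```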