Mutual Independence Definition Clarification

Let $Y_1, Y_2, \ldots, Y_n$ be iid random variables and $B_1, B_2, \ldots, B_n$ be Borel sets. It follows that

$P\left(\bigcap_{i=1}^{n} (Y_i \in B_i)\right) = \prod_{i=1}^{n} P(Y_i \in B_i)$ ... I think?

If so, does the converse hold true? My Stochastic Calculus professor says it does (or maybe I misinterpreted him somehow?), but I was under the impression that independence of the $n$ random variables was equivalent to saying that for any indices $i_1, i_2, \ldots, i_k$, $P\left(\bigcap_{j=1}^{k} (Y_{i_j} \in B_{i_j})\right) = \prod_{j=1}^{k} P(Y_{i_j} \in B_{i_j})$.

So, if the RVs are independent, then we can choose $i_j = j$ and $k = n$ to get $P\left(\bigcap_{i=1}^{n} (Y_i \in B_i)\right) = \prod_{i=1}^{n} P(Y_i \in B_i)$. But given only $P\left(\bigcap_{i=1}^{n} (Y_i \in B_i)\right) = \prod_{i=1}^{n} P(Y_i \in B_i)$, I don't know how to conclude that for any indices $i_1, i_2, \ldots, i_k$, $P\left(\bigcap_{j=1}^{k} (Y_{i_j} \in B_{i_j})\right) = \prod_{j=1}^{k} P(Y_{i_j} \in B_{i_j})$, if that's even the right definition.

Page 17 here seems to suggest otherwise, but I'm not sure.

Help please?

Also this (a solution posted as two images, not reproduced here):

So, that answer uses the $\Omega$ part to establish pairwise independence and ultimately to conclude independence. Without that assumption, we cannot conclude independence. Is that right? Why does that not contradict the definition of independence, $P\left(\bigcap_{i=1}^{n} (Y_i \in B_i)\right) = \prod_{i=1}^{n} P(Y_i \in B_i)$?


> I was under the impression that independence of the $n$ random variables was equivalent to saying that for any indices $i_1, i_2, \ldots, i_k$, $P\left(\bigcap_{j=1}^{k} (Y_{i_j} \in B_{i_j})\right) = \prod_{j=1}^{k} P(Y_{i_j} \in B_{i_j})$.

You misread: independence of $n$ random variables $(Y_1,\ldots,Y_n)$ is equivalent to the following condition:

(C) For all distinct indices $i_1, i_2, \ldots, i_k$ and all Borel sets $B_{i_1}, \ldots, B_{i_k}$, $P\left(\bigcap\limits_{j=1}^{k} (Y_{i_j} \in B_{i_j})\right) = \prod\limits_{j=1}^{k} P(Y_{i_j} \in B_{i_j})$.

Indeed, choosing $k=n$ and $i_j=j$, (C) implies condition (C'):

(C') For all Borel sets $B_1, \ldots, B_n$, $P\left(\bigcap\limits_{i=1}^{n} (Y_i \in B_i)\right) = \prod\limits_{i=1}^{n} P(Y_i \in B_i)$.

In the other direction, suppose (C') holds. Given distinct indices $i_1, i_2, \ldots, i_k$ and Borel sets $B_{i_1}, \ldots, B_{i_k}$, one can complete the collection of events with $(Y_s \in \mathbb{R})$ for each of the $n-k$ missing indices $s$. Since $(Y_s \in \mathbb{R}) = \Omega$ and $P(Y_s \in \mathbb{R}) = 1$, these extra events change neither the intersection nor the product, so (C) follows.
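
For concreteness, here is that completion argument written out for one small case, a sketch with $n = 3$ and the sub-collection of indices $\{1, 3\}$ (these particular values are chosen only for illustration; $B_1$ and $B_3$ are arbitrary Borel sets):

$$
\begin{aligned}
P\bigl((Y_1 \in B_1) \cap (Y_3 \in B_3)\bigr)
  &= P\bigl((Y_1 \in B_1) \cap (Y_2 \in \mathbb{R}) \cap (Y_3 \in B_3)\bigr)
     && \text{since } (Y_2 \in \mathbb{R}) = \Omega \\
  &= P(Y_1 \in B_1)\, P(Y_2 \in \mathbb{R})\, P(Y_3 \in B_3)
     && \text{by (C') with } B_2 = \mathbb{R} \\
  &= P(Y_1 \in B_1)\, P(Y_3 \in B_3)
     && \text{since } P(Y_2 \in \mathbb{R}) = 1,
\end{aligned}
$$

which is exactly the instance of (C) for the indices $\{1, 3\}$. The same substitution works for any sub-collection of indices, which is why (C') quantified over all Borel sets already carries the full strength of (C).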