When does pairwise independence imply independence?

One situation in which this is true is when the random variables involved are jointly normally distributed. Recall the definition: $X_1,\ldots,X_n$ are jointly normal if for every sequence $a_1,\ldots,a_n$ of (non-random) constants, the linear combination $a_1 X_1+\cdots+a_n X_n$ has a $1$-dimensional normal distribution. If jointly normal random variables are pairwise independent, then they are independent.
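To see why (assuming, for this sketch, that the covariance matrix is nonsingular, so a joint density exists): pairwise independence makes all the covariances vanish, so the covariance matrix is diagonal and the joint density factorizes into the product of the marginal densities,

$$ f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \prod_{i=1}^{n} \frac{1}{\sigma_i\sqrt{2\pi}}\exp\!\left(-\frac{(x_i-\mu_i)^2}{2\sigma_i^2}\right), $$

where $\mu_i=\operatorname{E}[X_i]$ and $\sigma_i^2=\operatorname{Var}(X_i)$. Factorization of the joint density is exactly independence.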

At the opposite extreme, you have the case of $Y_1,Y_2,Y_3$, where $Y_1, Y_2$ are independent with $\Pr(Y_1=1)=\Pr(Y_1=0)=1/2$ (and likewise for $Y_2$), and $Y_3$ is the mod-$2$ sum of $Y_1$ and $Y_2$. (The coins must be fair: with bias $p\neq 1/2$, pairwise independence already fails.) These three are pairwise independent, but the values of any two of them determine the value of the third.
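If you want to check this concretely, here is a minimal sketch in Python (the helpers `outcomes` and `prob` are my own names, not from any library) that enumerates the four equally likely outcomes and verifies that every pair factorizes while the triple does not:

```python
import itertools
from fractions import Fraction

# The four equally likely outcomes (y1, y2, y1 XOR y2) of two fair coin flips
outcomes = [(y1, y2, (y1 + y2) % 2) for y1 in (0, 1) for y2 in (0, 1)]

def prob(event):
    # Each outcome has probability 1/4; count those satisfying the event
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

# Pairwise independence: every pair of coordinates factorizes
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((0, 1), repeat=2):
        assert prob(lambda w: w[i] == a and w[j] == b) == \
               prob(lambda w: w[i] == a) * prob(lambda w: w[j] == b)

# Mutual independence fails: the triple probability does not factorize
assert prob(lambda w: w == (0, 0, 0)) == Fraction(1, 4)
assert prob(lambda w: w[0] == 0) * prob(lambda w: w[1] == 0) \
       * prob(lambda w: w[2] == 0) == Fraction(1, 8)
```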


I was also wondering the same thing, and I think it's instructive to restate mutual independence in an equivalent way. I will do it below for three real random variables, but the statement generalizes to any finite number.

Let $X, Y, Z$ be pairwise independent random variables and also assume that $(X,Y)$ is independent of $Z$. Then $X,Y,Z$ are mutually independent.

To prove this, pick Borel sets $A, B, C$; then:

$$ \begin{aligned} \Pr[ X \in A, Y \in B, Z \in C] &= \Pr[ (X,Y) \in A\times B, Z \in C] \\ &= \Pr[(X,Y) \in A\times B] \Pr[Z \in C] \\ &= \Pr[X \in A, Y \in B] \Pr[Z \in C] \\ &= \Pr[X \in A] \Pr[Y \in B] \Pr[Z \in C] \end{aligned} $$

The second equality uses the independence of $(X,Y)$ and $Z$, and the last uses the independence of $X$ and $Y$. Together with the pairwise factorizations already assumed, this is exactly mutual independence of $X, Y, Z$.
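This also squares with the mod-$2$ counterexample above: there, the hypothesis that fails is precisely the independence of the pair from the third variable. A quick sketch (reusing the same hypothetical `prob` helper) confirms that $(Y_1,Y_2)$ is not independent of $Y_3$:

```python
from fractions import Fraction

# The four equally likely outcomes (y1, y2, y1 XOR y2) from the counterexample above
outcomes = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def prob(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

# The pair (Y1, Y2) is not independent of Y3:
lhs = prob(lambda w: (w[0], w[1]) == (0, 0) and w[2] == 0)               # 1/4
rhs = prob(lambda w: (w[0], w[1]) == (0, 0)) * prob(lambda w: w[2] == 0)  # 1/8
assert lhs != rhs
```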