Are two random vectors independent iff every pair of components, one taken from each vector, is independent?
Solution 1:
It is not true in general. As a counterexample in $\Bbb{R}^2$, let each $x_i$ be chosen as a uniform variate on $(0,1)$ and form the $y_i$ using two further independent uniform variates $u_i$ by $$ \begin{array}{c} y_1 = (x_1 - x_2)_{\mod 1}+u_1 \\ y_2 = (x_1 + x_2)_{\mod 1}+u_2 \\ \end{array} $$ Knowledge of either $x_i$ alone does not affect the distribution of either $y_i$, but knowledge of the whole vector $\vec{x}$ strongly affects the distribution of $\vec{y}$; and knowledge of $\vec{y}$ does not affect the distribution of (say) $x_1$ unless you also know something about the value of $x_2$. So every pair of components is independent, but the two vectors are not.
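For readers who like to see it numerically, here is a quick Monte Carlo sketch of the construction above (NumPy; the seed, sample size, and conditioning windows are arbitrary choices for illustration, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# The construction above: x1, x2, u1, u2 are independent Uniform(0,1) variates.
x1, x2, u1, u2 = rng.random((4, n))
y1 = np.mod(x1 - x2, 1) + u1
y2 = np.mod(x1 + x2, 1) + u2

# Pairwise check: the distribution of y1 looks the same whether x1 is small or
# large (the other three (x_i, y_j) pairs behave the same way).
lo, hi = x1 < 0.1, x1 > 0.9
print("y1 | x1 < 0.1 : mean %.3f  var %.3f" % (y1[lo].mean(), y1[lo].var()))
print("y1 | x1 > 0.9 : mean %.3f  var %.3f" % (y1[hi].mean(), y1[hi].var()))

# Joint check: conditioning on the whole vector (x1, x2) pins y1 down to a
# unit-length interval, so its conditional variance collapses towards
# Var(u1) = 1/12, well below the unconditional Var(y1) = 1/6.
window = (np.abs(x1 - 0.3) < 0.01) & (np.abs(x2 - 0.8) < 0.01)
print("y1 unconditional         : var %.3f" % y1.var())
print("y1 | x1 ~ 0.3, x2 ~ 0.8  : var %.3f" % y1[window].var())
```

Of course, matching a couple of moments across bins is only evidence; the exact reason for the pairwise independence is that, for fixed $x_1$, the variate $(x_1 - x_2)_{\mod 1}$ is still uniform on $(0,1)$ (a uniform variate shifted mod 1 by any constant stays uniform), and likewise with the roles of $x_1$ and $x_2$ exchanged.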
(Obviously, the other direction does hold: if the vectors are independent, then any pair consisting of one variate from each must be independent.)
If $X$ and $Y$ are jointly normally distributed, then pairwise independence does imply vector independence.
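To spell out why (this is the standard block-covariance argument, nothing specific to this question): if $(X,Y)$ is jointly Gaussian, each pair $(X_i, Y_j)$ is bivariate normal, so pairwise independence is the same as $\operatorname{Cov}(X_i,Y_j)=0$ for all $i,j$. The cross-covariance block then vanishes, $$ \operatorname{Cov}\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} \Sigma_{XX} & 0 \\ 0 & \Sigma_{YY} \end{pmatrix}, $$ and a Gaussian density with block-diagonal covariance factors as $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$, which is exactly independence of the two vectors.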
The second statement is the trickiest. If $X$ and $Y$ are each normally distributed but not necessarily jointly so, you might try for a similar counterexample: this time, let each $x_i$ be chosen as a unit normal and take $$ \begin{array}{c} y_1 = N[(x_1 - x_2)_{\mod 1}]\\ y_2 = N[(x_1 + x_2)_{\mod 1}] \end{array} $$ where $N[\mu]$ stands for a normal variate with mean $\mu$ and unit variance. But here, because you do have some knowledge that $x_2$ is likely near zero, knowledge of either $y_i$ does affect the distribution of $x_1$.
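To make that failure explicit (just restating the previous point in symbols, with $\varphi$ denoting the standard normal density): given $x_1$, the variate $(x_1 - x_2)_{\mod 1}$ has the wrapped-normal density $$ f(t \mid x_1) = \sum_{k \in \Bbb Z} \varphi(x_1 - t + k), \qquad 0 \le t < 1, $$ which, unlike in the uniform case, is not exactly the same for every value of $x_1$; so $y_1$ carries a little exact information about $x_1$, and the would-be pairwise independence is only approximate.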
A counterexample can be found, however, by replacing $(x_1 - x_2)_{\mod 1}$ with something like $$ \left( e^{(x_1-x_2)^2/2} \right)_{\mod 1} $$ (and similarly for $(x_1 + x_2)_{\mod 1}$), so that the lack of knowledge of the exact value of $x_2$ again washes out any information obtained from knowing $y_1$.
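Written out, the requirement on whatever replacement $g(x_1,x_2)$ is chosen (here $g$ is just my name for the inner function, e.g. the exponential above, and likewise for the function used in $y_2$) is that its value mod 1 be exactly uniform given either coordinate alone: $$ \Pr\bigl( g(x_1,x_2)_{\mod 1} \le t \mid x_1 \bigr) = \Pr\bigl( g(x_1,x_2)_{\mod 1} \le t \mid x_2 \bigr) = t, \qquad 0 \le t \le 1, $$ so that, just as in the uniform construction, each $y_i$ is individually uninformative about each $x_j$, while the pair $(y_1, y_2)$ still depends on $(x_1, x_2)$ jointly.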