Criterion for independence of random variables

Solution 1:

  1. "Any two functions" means Borel measurable functions (we need to integrate random variables) for which the integrals make sense (for example bounded functions). It's enough to do the test among $g$ and $h$ continuous bounded functions. Indeed, we can approximate pointwise the characteristic function of a closed set by a sequence of continuous bounded functions, hence if $F_1$ and $F_2$ are closed, we have $\mu\{(X,Y)\in F_1\times F_2\}=\mu\{X\in F_1\}\mu\{Y\in F_2\}$. Then we can extend this identity to $B_1$ and $B_2$ arbitrary Borel subsets.

  2. Actually, it seems that we use the direction "$X$ independent of $Y$ implies $E[g(X)h(Y)]=E[g(X)]E[h(Y)]$ for $g,h$ measurable and bounded" more often than the converse. For the converse, it suffices to check the equality for $g(x)=e^{isx}$ and $h(y)=e^{ity}$ with $s,t\in\mathbb R$ fixed (see the computation after this list).
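
For point 1, one possible approximating sequence (a sketch; the specific $g_n$ below is just one standard choice, any continuous bounded functions decreasing pointwise to the indicator would do) is, for a closed set $F\subseteq\mathbb R$,
$$g_n(x)=\max\bigl(0,\,1-n\,d(x,F)\bigr),\qquad d(x,F)=\inf_{y\in F}|x-y|.$$
Each $g_n$ is continuous with values in $[0,1]$, and $g_n\downarrow\mathbf 1_F$ pointwise since $d(x,F)>0$ for $x\notin F$. Taking such sequences $g_n$ for $F_1$ and $h_n$ for $F_2$, dominated convergence gives
$$\mu\{(X,Y)\in F_1\times F_2\}=\lim_n E[g_n(X)h_n(Y)]=\lim_n E[g_n(X)]\,E[h_n(Y)]=\mu\{X\in F_1\}\,\mu\{Y\in F_2\}.$$
Since the closed sets form a $\pi$-system generating the Borel $\sigma$-algebra, Dynkin's $\pi$-$\lambda$ theorem extends this identity to arbitrary Borel sets, first in the first argument with $F_2$ fixed and then in the second.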
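
For point 2, writing $\mu_X$, $\mu_Y$ for the laws of $X$ and $Y$ and $\varphi$ for characteristic functions (in the Fourier sense), the converse can be sketched as follows. Applying the hypothesis to the bounded functions $\cos(sx)$, $\sin(sx)$, $\cos(ty)$, $\sin(ty)$ and expanding by linearity gives, for all $s,t\in\mathbb R$,
$$\varphi_{(X,Y)}(s,t)=E\bigl[e^{isX}e^{itY}\bigr]=E\bigl[e^{isX}\bigr]\,E\bigl[e^{itY}\bigr]=\varphi_X(s)\,\varphi_Y(t),$$
which is the characteristic function of the product measure $\mu_X\otimes\mu_Y$ at $(s,t)$. By the uniqueness theorem for characteristic functions on $\mathbb R^2$, the law of $(X,Y)$ equals $\mu_X\otimes\mu_Y$, i.e. $X$ and $Y$ are independent.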