Can the variances of two subsets of a set of observations of a random variable both be greater than the variance of the original, complete set of observations?
Solution 1:
Throughout, each finite set of observations $T\subset\mathbb{R}$ is given the uniform (equiprobable) distribution, so $Var(T)=\frac{1}{|T|}\sum_{z\in T}(z-\mu(T))^2$, where $\mu(T)$ denotes the mean of $T$.
Suppose, for contradiction, that $S$ is partitioned into $S_1$ and $S_2$ and that *both* parts satisfy $Var(S_1)>Var(S)$ and $Var(S_2)>Var(S)$. Taking the weighted average of these two inequalities with weights $\frac{|S_1|}{|S|}$ and $\frac{|S_2|}{|S|}$ (which sum to $1$), and using $\frac{|S_i|}{|S|}Var(S_i)=\frac{1}{|S|}\sum_{z\in S_i}(z-\mu(S_i))^2$, gives
$$\frac{1}{|S|}\sum_{z\in S_1}(z-\mu(S_1))^2+\frac{1}{|S|}\sum_{z\in S_2}(z-\mu(S_2))^2>\\>Var(S)=\frac{1}{|S|}\sum_{z\in S}(z-\mu(S))^2=\frac{1}{|S|}\sum_{z\in S_1}(z-\mu(S))^2+\frac{1}{|S|}\sum_{z\in S_2}(z-\mu(S))^2$$
We subtract:
$$\frac{1}{|S|}\sum_{z\in S_1}\left[(z-\mu(S_1))^2-(z-\mu(S))^2\right]+\\+\frac{1}{|S|}\sum_{z\in S_2}\left[(z-\mu(S_2))^2-(z-\mu(S))^2\right]>0$$
Now each bracketed sum is nonpositive, because $\mu(S_i)$ minimizes $m\mapsto\sum_{z\in S_i}(z-m)^2$, and each leading coefficient $\frac{1}{|S|}$ is positive. Hence the left-hand side is nonpositive, which is a contradiction. So at least one of the two subsets has variance at most $Var(S)$. Note that a *single* subset can still exceed it: with $S=\{0,5,10\}$ and $S_1=\{0,10\}$ we get $Var(S_1)=25>\frac{50}{3}=Var(S)$.
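As a quick numeric sanity check (a minimal sketch; the data set $\{0,5,10\}$ and its split into $\{0,10\}$ and $\{5\}$ are illustrative choices, not part of the question), population variances confirm that one part of a partition can exceed $Var(S)$, but then the other part cannot:

```python
# Population variance Var(T) = (1/|T|) * sum_{z in T} (z - mu(T))^2.
def var(xs):
    mu = sum(xs) / len(xs)
    return sum((z - mu) ** 2 for z in xs) / len(xs)

S = [0, 5, 10]           # full set of observations (illustrative choice)
S1, S2 = [0, 10], [5]    # a partition of S

print(var(S))    # 50/3, approx. 16.67
print(var(S1))   # 25.0 -- one subset's variance CAN exceed Var(S)...
print(var(S2))   # 0.0  -- ...but then the other part's cannot

# Weighted-average bound used in the proof:
#   Var(S) >= (|S1|/|S|) * Var(S1) + (|S2|/|S|) * Var(S2)
weighted = len(S1) / len(S) * var(S1) + len(S2) / len(S) * var(S2)
assert var(S) >= weighted - 1e-12  # small tolerance for float rounding
```

The final assertion is exactly the inequality derived above: the full-set variance dominates the size-weighted average of the subset variances, so both subsets cannot strictly exceed it.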