Why are all subset sizes equiprobable if each element is independently included with a common probability $p$ drawn uniformly from $[0,1]$?

The answer is as simple and elegant as I thought it must be, and is given in this answer (which states that Bayes used this argument).

To decide whether to include an element in the subset, we can generate a number $r$ uniformly at random in $[0,1]$; we include the element if $r\lt p$.

Now consider the probability $p$, which is also drawn uniformly from $[0,1]$, as an $(n+1)$-st number of the same kind. The size of the subset is the number of the $n$ generated values $r$ satisfying $r\lt p$, i.e. the rank of $p$ among these $n+1$ numbers minus one. By symmetry $p$ is equally likely to occupy any of the $n+1$ ranks, so each subset size $0,1,\ldots,n$ has probability $\frac{1}{n+1}$.
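
A minimal Monte Carlo sketch of this argument (the function name `random_subset_size`, the value of `n`, and the trial count are my own illustrative choices, not from the original post): draw $p$, include each element exactly when its own uniform draw falls below $p$, and tally the subset sizes, which should all come out near $\frac{1}{n+1}$.

```python
import random
from collections import Counter

def random_subset_size(n):
    """Draw p ~ U[0,1], then include each of the n elements exactly
    when its own uniform draw r satisfies r < p; return the subset size."""
    p = random.random()
    return sum(random.random() < p for _ in range(n))

n, trials = 5, 200_000
counts = Counter(random_subset_size(n) for _ in range(trials))

# Each size 0..n should occur with empirical frequency close to 1/(n+1).
for size in range(n + 1):
    print(size, counts[size] / trials)
```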


The following argument is in terms of generating functions; it uses neither binomial coefficients nor Beta integrals.

A coin has probability $p$ of heads and is thrown $n$ times. If you obtain $r$ heads you gain $x^r$, so the coefficient of $x^r$ in the expected gain is exactly the probability of obtaining $r$ heads. For fixed $p$ the expected gain is $$E(p)=\bigl(px+(1-p)\bigr)^n\ .$$ As $p$ is uniformly distributed over $[0,1]$ we now have to calculate $$E:=\int_0^1 E(p)\>dp={1\over n+1}\,{\bigl(px+(1-p)\bigr)^{n+1}\over x-1}\Biggr|_{p=0}^{p=1}={1\over n+1}\,{x^{n+1}-1\over x-1}\ ,$$ hence $$E={1\over n+1}\bigl(1+x+x^2+\ldots+x^n\bigr)\ .$$ Every power $x^r$ with $r\in\{0,1,\ldots,n\}$ appears with coefficient ${1\over n+1}$, which shows that each number of heads has the same "overall" probability ${1\over n+1}$.
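
A short symbolic check of this computation (illustrative only; the use of `sympy` and the fixed value of $n$ are my own assumptions, not part of the original answer): integrate $\bigl(px+(1-p)\bigr)^n$ over $p\in[0,1]$ and compare the result with ${1\over n+1}(1+x+\cdots+x^n)$.

```python
import sympy as sp

p, x = sp.symbols('p x')
n = 6  # any fixed n works; 6 is just for illustration

# Average the expected gain (p*x + (1-p))**n over p ~ U[0,1].
E = sp.integrate((p*x + (1 - p))**n, (p, 0, 1))

# Compare with (1 + x + ... + x^n) / (n + 1).
target = sum(x**r for r in range(n + 1)) / sp.Integer(n + 1)
print(sp.simplify(E - target))  # prints 0
```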