Understanding what $\sqrt{p}$ means for an event of probability $p$
Say I have a random event $E$ with probability $p$. There is a natural interpretation in terms of $E$ for the probability $p^2$: it's the probability that $E$ occurs twice if I perform two independent trials.
Is there a similar interpretation for the probability $\sqrt{p}$? More generally, given $x \in ]0, 1[$, is there an interpretation of $p^x$?
As already said in the comments, the answer depends very much on the simulation procedures one allows. Assume for example that $p=\frac12$ and that one wants to simulate a bit $0$ or $1$ with probability $\sqrt{p}=\frac1{\sqrt2}$ of being $1$. Since $\frac1{\sqrt2}$ is not dyadic, this is impossible from a finite collection of unbiased independent bits, but, as soon as one allows a stream of independent unbiased bits of finite but unlimited length, basically everything becomes possible.
To see this in the example at hand, consider the series expansion
$$
\frac1{\sqrt2}=\sum\limits_{n\geqslant0}{2n\choose n}\frac1{2^{3n+1}}.
$$
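As a quick numerical sanity check (an illustration only, not part of the argument), the series converges fast enough that a few dozen terms already match $\frac1{\sqrt2}$ to double precision; a possible Python sketch:

```python
# Partial sum of sum_{n>=0} C(2n, n) / 2^(3n+1), compared with 1/sqrt(2).
from math import comb, sqrt

partial_sum = sum(comb(2 * n, n) / 2 ** (3 * n + 1) for n in range(60))
print(partial_sum, 1 / sqrt(2))  # both ≈ 0.70710678...
```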
This suggests the following procedure. First simulate some independent unbiased bits with values $0$ or $1$, and count the number $N$ of $0$s before the first $1$. If $N=0$ (that is, if the first bit is $1$), return $B=1$. Otherwise, simulate $2N$ other independent unbiased bits with values $0$ or $1$ and consider their sum $S$. If $S=N$, return $B=1$; otherwise, return $B=0$. Then $B$ has probability $\frac1{\sqrt2}$ of being $1$: indeed, $P(N=n)=\frac1{2^{n+1}}$ and, conditionally on $N=n$, $P(S=n)={2n\choose n}\frac1{2^{2n}}$, hence $P(B=1)=\sum\limits_{n\geqslant0}{2n\choose n}\frac1{2^{3n+1}}=\frac1{\sqrt2}$.
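For concreteness, here is a short Python sketch of this procedure (the function names are mine, and `random.getrandbits(1)` stands in for a source of independent unbiased bits):

```python
import random

def unbiased_bit():
    """One independent unbiased bit: 0 or 1 with probability 1/2 each."""
    return random.getrandbits(1)

def simulate_B():
    """Return a bit B with P(B = 1) = 1/sqrt(2), following the procedure above."""
    # Count the number N of 0s before the first 1.
    N = 0
    while unbiased_bit() == 0:
        N += 1
    if N == 0:          # the first bit was 1
        return 1
    # Otherwise draw 2N further unbiased bits and compare their sum S with N.
    S = sum(unbiased_bit() for _ in range(2 * N))
    return 1 if S == N else 0

# Empirical check: the frequency of 1s should be close to 1/sqrt(2) ≈ 0.7071.
print(sum(simulate_B() for _ in range(10**6)) / 10**6)
```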
The mean number of unbiased bits needed to generate $N$ is $E[N+1]=2$, and since $E[N]=1$ the mean number of additional bits is $E[2N]=2$, hence the mean number of unbiased bits needed to get each (biased) bit $B$ is $2+2=4$.
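This bit count can also be checked empirically; the following standalone sketch (again only an illustration, with my own bookkeeping of the bits consumed) estimates both $P(B=1)$ and the mean number of unbiased bits per output bit:

```python
import random

def simulate_B_with_cost():
    """Return (B, number of unbiased bits consumed) for one run of the procedure."""
    used = 0
    N = 0
    while True:                  # draw bits until the first 1, counting the 0s
        used += 1
        if random.getrandbits(1) == 1:
            break
        N += 1
    if N == 0:
        return 1, used
    used += 2 * N                # 2N further bits, summed into S
    S = sum(random.getrandbits(1) for _ in range(2 * N))
    return (1 if S == N else 0), used

runs = 10**6
samples = [simulate_B_with_cost() for _ in range(runs)]
print(sum(b for b, _ in samples) / runs)  # ≈ 0.7071, i.e. 1/sqrt(2)
print(sum(c for _, c in samples) / runs)  # ≈ 4 unbiased bits per biased bit
```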