Suppose we flip a coin 100 times and want to check whether it is fair. Let $X_{1}, X_{2}, \ldots, X_{100}$ be the outcomes of the individual flips, modeled as i.i.d. Bernoulli random variables with parameter $p$. For the $i^{\text{th}}$ flip, $X_{i} = 1$ if we get a head and $X_{i} = 0$ otherwise.

Our hypotheses: $H_{0}: p = \frac{1}{2}$ against $H_{1}: p \neq \frac{1}{2}$.

Let $X = \sum\limits^{100}_{i=1}X_{i}$. If $40\leq X \leq 60$, we do not reject the null hypothesis $H_{0}$; otherwise we reject it. Clearly, $X \sim \mathrm{Bin}(100, p)$.

$\mathbb{P}(40\leq X \leq 60) = \sum\limits^{60}_{x=40} \binom{100}{x}p^{x}(1-p)^{100-x}$.
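
As a quick numerical check, this acceptance probability can be evaluated in R with dbinom; here it is at the null value $p = \frac{1}{2}$ (an illustrative choice):

sum(dbinom(40:60, 100, 0.5))   # P(40 <= X <= 60) when p = 1/2
[1] 0.9647998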

The Type II error probability is the probability of failing to reject the null hypothesis when the alternative hypothesis is true. Suppose $P$ is a uniform random variable defined on $(0,\frac{1}{2})\cup(\frac{1}{2}, 1)$; then we have:

$$\begin{aligned}
\mathbb{P}(40\leq X \leq 60) &= \int^{1}_{0}\mathbb{P}\left(40\leq X \leq 60 \,\middle|\, p \in \left(0,\tfrac{1}{2}\right)\cup\left(\tfrac{1}{2}, 1\right)\right)f_{P}(p)\, dp\\
&= \int^{1}_{0}\mathbb{P}\left(40\leq X \leq 60 \,\middle|\, p \in \left(0,\tfrac{1}{2}\right)\cup\left(\tfrac{1}{2}, 1\right)\right)\, dp\\
&= \sum^{60}_{x=40} \int^{1}_{0}\binom{100}{x}p^{x}(1-p)^{100-x}\, dp\\
&= \sum^{60}_{x=40}\int^{1}_{0}\frac{100!}{x!(100-x)!}p^{x}(1-p)^{100-x}\, dp\\
&= \sum^{60}_{x=40}\frac{100!}{101!}\int^{1}_{0}\frac{\Gamma(102)}{\Gamma(x+1)\Gamma(101-x)}p^{x}(1-p)^{100-x}\, dp\\
&= \sum^{60}_{x=40}\frac{100!}{101!} = \frac{21}{101}
\end{aligned}$$
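
As a sanity check on the arithmetic only (not on the interpretation), the same average of the acceptance probability over a Uniform$(0,1)$ value of $p$ can be computed numerically in R; a minimal sketch:

# average P(40 <= X <= 60 | P = p) over p in (0,1); should return about 0.2079 = 21/101
integrate(function(p) pbinom(60, 100, p) - pbinom(39, 100, p), 0, 1)$value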

What's wrong with my solution?


Power of a two-sided binomial test.

When testing $H_0: p=.5$ against $H_a: p\ne .5$ based on $n=100$ Bernoulli trials $X_i,$ the significance level is $P(\mathrm{Rej}\,|\,p=.5) = P(|S_{100}-50|>10\, |\, p=.5)=0.0352.$ Here $$S = S_{100}= \sum_{i=1}^{100}X_i \sim\mathsf{Binom}(n=100, p=.5).$$

The probability is found by computation in R (dbinom is the binomial PMF). We sum the probabilities of all outcomes that lead to rejecting $H_0.$

sum(dbinom(c(0:39,61:100), 100, .5)) 
[1] 0.0352002

Note: Alternatively, you could approximate this probability by standardizing and using a standard normal distribution [with a continuity correction and, for $p=.5,$ symmetry]. In R, pnorm is the normal CDF.

     z = ((39.5-50)/5)
     2*pnorm(z, 0, 1)
     [1] 0.03572884
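
The same approximation can be written with both tails explicit, without using the symmetry at $p=.5$ (an equivalent variation, not part of the computation above):

     z.lo = (39.5-50)/5; z.hi = (60.5-50)/5   # continuity-corrected cutoffs
     pnorm(z.lo) + (1 - pnorm(z.hi))          # lower tail + upper tail
     [1] 0.03572884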

Then the power $\pi(p_0)$ for the particular alternative $p_0\ne 0.5$ is $P(\mathrm{Rej}\, |\,p_0) = P(|S-50|>10\,|\,p_0).$

In particular, $\pi(1/3)$, from R:

sum(dbinom(c(0:39,61:100),100,1/3)) 
[1] 0.9033769
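
Equivalently, the binomial CDF pbinom gives the same value without enumerating the rejection region (a small variation on the computation above):

1 - (pbinom(60, 100, 1/3) - pbinom(39, 100, 1/3))
[1] 0.9033769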

Addendum: Also, by letting $p_0$ take fifty equally spaced values spanning $[0,1],$ we can make a power curve, showing the power of our test for various values of $p.$

p = seq(0, 1, len=50); Power = numeric(50)    # grid of p values and storage for power
for(i in 1:50) {
  # P(reject H0) = P(S <= 39) + P(S >= 61) at each value of p
  Power[i] = sum(dbinom(c(0:39, 61:100), 100, p[i]))
}
hdr = "Power Curve"
plot(p, Power, type="l", col="blue",
     lwd=2, ylim=0:1, main=hdr)
abline(v=0:1, col="green2")                   # frame the unit interval
abline(h=0, col="green2")
abline(h=0.0352, col="red")                   # significance level
points(.5, 0.0352, pch=19, col="red")         # the curve passes through (0.5, alpha)

[Figure: power curve of the two-sided test, with the significance level $0.0352$ marked in red at $p = 0.5$.]

The single point (red) at $(0.5, 0.0352)$ shows the significance level, not a power value.
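
If you prefer to avoid the explicit loop, pbinom is vectorized over its probability argument, so the same power curve can be computed directly (an equivalent sketch of the loop above):

Power2 = 1 - (pbinom(60, 100, p) - pbinom(39, 100, p))  # P(S <= 39) + P(S >= 61) for each p
all.equal(Power, Power2)                                 # should agree with the loop result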

Reference: Here is a discussion of power for a one-sided binomial test.