For the binomial distribution, why does no unbiased estimator exist for $1/p$?

Assume that $U:X\mapsto U(X)$ is an unbiased estimator of $1/p$ for some given value $p$ in $(0,1]$. This means that $E_p(U(X))=1/p$, that is, that $G(p)=1$, where $$ G(p)=pE_p(U(X))=\sum_{k=0}^n{n\choose k}U(k)p^{k+1}(1-p)^{n-k}. $$ Now $G$ is a polynomial of degree at most $n+1$ with $G(0)=0$, hence $G-1$ is a nonzero polynomial of degree at most $n+1$, and the equation $G(p)=1$ has at most $n+1$ roots. Thus, any estimator $U$ of $1/p$ can be unbiased for at most $n+1$ values of $p$. In particular, no estimator of $1/p$ can be unbiased for every $p$ in $(0,1)$ (the situation the question asks about). Likewise, no estimator of $1/p$ can be unbiased for every $p$ in $(1/2,1)$ (a situation in which $1/p$ is uniformly bounded, as mentioned in the comments).
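A quick symbolic check of this degree argument, as a minimal Python/SymPy sketch (the value $n=4$ and the estimator values $U(k)$ below are arbitrary hypothetical choices; any other choice behaves the same way):

```python
import sympy as sp

n = 4
p = sp.symbols("p")
U = [sp.Rational(2), 1, 1, 1, 1]   # hypothetical values U(0), ..., U(n)

# G(p) = p * E_p[U(X)] for X ~ Binomial(n, p)
G = sp.expand(sum(sp.binomial(n, k) * U[k] * p**(k + 1) * (1 - p)**(n - k)
                  for k in range(n + 1)))

print(sp.degree(G, p))             # degree of G: at most n + 1 = 5
print(G.subs(p, 0))                # G(0) = 0, so G is not identically 1
print(sp.Poly(G - 1, p).nroots())  # roots of G(p) = 1: at most n + 1 of them
```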

The same argument applies to any rational function $Q(p)/R(p)$ in place of $1/p$, where $Q$ and $R$ are polynomials such that $Q(0)\ne0=R(0)$.
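To spell this out: if $E_p(U(X))=Q(p)/R(p)$ for every $p$ in some interval, then $$R(p)\sum_{k=0}^n{n\choose k}U(k)p^k(1-p)^{n-k}=Q(p)$$ on that interval; two polynomials that agree on an interval agree everywhere, yet at $p=0$ the left-hand side equals $R(0)U(0)=0$ while $Q(0)\ne0$.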


Note that $$\frac{1}{p} = \frac{1}{1-(1-p)} = 1+(1-p)+(1-p)^2+\cdots,$$ a geometric series that converges for $p$ in $(0,1)$.

Assume there exists an unbiased estimator $T(X)$ of $\frac{1}{p}$. Then for every such $p$ we have $$\sum_{k=0}^{n} T(k){n \choose k}p^k(1-p)^{n-k} = 1+(1-p)+(1-p)^2+\cdots.$$

Observe that the LHS is a polynomial in $(1-p)$ of degree at most $n$, while the RHS is a power series in $(1-p)$ with infinitely many nonzero coefficients. By uniqueness of power-series expansions, the two cannot agree on an interval, so no such $T$ exists.
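A small numeric illustration of the mismatch (the estimator $T(k)=n/(k+1)$ is a hypothetical choice made only for this example): the expectation stays bounded by $\max_k T(k)$ while $1/p$ diverges as $p\to0$.

```python
import math

n = 10
T = [n / (k + 1) for k in range(n + 1)]   # hypothetical estimator values

def expectation(p):
    """E_p[T(X)] for X ~ Binomial(n, p): a polynomial in p of degree <= n."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * T[k]
               for k in range(n + 1))

for p in [0.5, 0.1, 0.01, 0.001]:
    print(f"p={p}: E_p[T(X)] = {expectation(p):.3f}, 1/p = {1/p:.1f}")
# E_p[T(X)] is bounded by max(T) = n, while 1/p diverges: T cannot be unbiased.
```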


Suppose $X_1,\ldots,X_n$ are iid Bernoulli($\theta$). If we want to estimate $\tau(\theta)=\frac{1}{\theta}$, we find that no unbiased estimator exists.

By the invariance principle, the MLE of $\tau(\theta)=\frac{1}{\theta}$ is $\tau(\hat\theta)=\frac{1}{\bar x}$. But this estimator is biased (indeed, $\frac{1}{\bar x}$ is not even defined when $\bar x=0$, which happens with positive probability).
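A quick Monte Carlo sketch of this bias (the values of $n$, $\theta$, and the simulation size are arbitrary assumptions; all-zero samples, for which $1/\bar x$ is undefined, are skipped and are vanishingly rare at this $\theta$):

```python
import random

random.seed(0)
n, theta, sims = 20, 0.8, 100_000
estimates = []
for _ in range(sims):
    s = sum(random.random() < theta for _ in range(n))  # Binomial(n, theta) draw
    if s > 0:                    # 1 / x-bar is undefined when s == 0: skip
        estimates.append(n / s)  # MLE: 1 / x-bar = n / sum(x)

# Comes out above 1/theta = 1.25, consistent with Jensen's inequality.
print(sum(estimates) / len(estimates))
```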

Now I give a proof by contradiction that no unbiased estimator exists.

Suppose $W(x)$ is an unbiased estimator of $\frac{1}{\theta}$; we will derive a contradiction, which shows that no estimator (not only the MLE) can be unbiased. Let $M=\sup_x W(x)$ and note that $M<\infty$: each observation in the sample $x=(X_1,\ldots,X_n)$ is either 0 or 1, so there are only $2^n$ possible data sets, and the supremum of $W$ over a finite set is just its maximum.

Then $E_\theta W(x) \leq M$ for all $\theta$; in other words, $M$ is an upper bound on the expectation.

By the definition of unbiasedness, $E_\theta W(x) = \frac{1}{\theta}$, and $\frac{1}{\theta}\rightarrow\infty$ as $\theta\rightarrow0$. Letting $\theta\rightarrow0$ is allowed because $\theta$ ranges over $(0,1)$, so $\theta$ can be taken arbitrarily small, and then $\frac{1}{\theta}$ grows without bound.

But we showed above that $E_\theta W(x) \leq M$ for all $\theta$, including arbitrarily small $\theta$. This is a contradiction, so no unbiased estimator of $\frac{1}{\theta}$ exists.
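To make the contradiction concrete, here is an exact check by enumeration (the bounded estimator $W$ below is hypothetical, and $n$ is kept small so that all $2^n$ data sets can be listed):

```python
from itertools import product

n = 5

def W(x):
    """Hypothetical bounded estimator; any W would do. Here M = max W = n."""
    return n / (sum(x) + 1)

samples = list(product([0, 1], repeat=n))  # all 2**n possible data sets
M = max(W(x) for x in samples)             # sup over a finite set = max

def expectation(theta):
    """E_theta[W(X)]: sum of W(x) * P_theta(x) over the 2**n data sets."""
    return sum(W(x) * theta**sum(x) * (1 - theta)**(n - sum(x))
               for x in samples)

for theta in [0.5, 0.1, 0.01]:
    print(f"theta={theta}: E[W] = {expectation(theta):.3f} <= M = {M}, "
          f"but 1/theta = {1/theta:.0f}")
# E_theta[W] never exceeds M, yet 1/theta is unbounded as theta -> 0.
```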