What's the difference between Rao-Blackwell Theorem and Lehmann-Scheffé Theorem?

Rao–Blackwell says the conditional expected value of an unbiased estimator given a sufficient statistic is another unbiased estimator that's at least as good. (I seem to recall that you can drop the assumption of unbiasedness and all you lose is the conclusion of unbiasedness; you still improve the estimator. So you can apply it to MLEs and other possibly biased estimators.) In examples that are commonly exhibited, the Rao–Blackwell estimator is immensely better than the estimator that you start with. That's because you usually start with something really crude, because it's easy to find, and you know that the Rao–Blackwell estimator will be pretty good no matter how crude the thing you start with is.
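To make "immensely better" concrete, here is the standard textbook example (not part of the answer above): let $X_1, \dots, X_n$ be i.i.d. Poisson$(\lambda)$ and suppose we want to estimate $\theta = e^{-\lambda} = P(X_1 = 0)$. A crude unbiased estimator is the indicator $\mathbf 1\{X_1 = 0\}$, which throws away all but one observation. Conditioning on the sufficient statistic $T = \sum_{i=1}^n X_i$, and using the fact that $X_1 \mid T = t \sim \operatorname{Binomial}(t, 1/n)$,

$$\mathbb{E}\big[\mathbf 1\{X_1 = 0\} \mid T = t\big] = P(X_1 = 0 \mid T = t) = \Big(1 - \frac{1}{n}\Big)^{t},$$

which is still unbiased for $e^{-\lambda}$ and has far smaller variance than the crude indicator.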

The Lehmann–Scheffé theorem has an additional hypothesis that the sufficient statistic is complete, i.e. it admits no unbiased estimators of zero other than zero itself. It also has an additional conclusion: the estimator you get is the unique best unbiased estimator.
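Stated compactly (the notation here is mine, not the answers'): if $T$ is sufficient and $\delta$ is any estimator with finite variance, then

$$\operatorname{Var}\big(\mathbb{E}[\delta \mid T]\big) \le \operatorname{Var}(\delta),$$

and $\mathbb{E}[\delta \mid T]$ has the same expectation as $\delta$ (Rao–Blackwell). If in addition $T$ is complete and $\mathbb{E}[h(T)] = g(\theta)$ for all $\theta$, then $h(T)$ is the essentially unique unbiased estimator of $g(\theta)$ that is a function of $T$, and it is the UMVUE (Lehmann–Scheffé).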

So if an unbiased estimator is a function of a complete sufficient statistic, then it's the best possible unbiased estimator. Lehmann–Scheffé gives you that conclusion, but Rao–Blackwell does not. So the statement in the question about what Rao–Blackwell says is incorrect.

It should also be remembered that in some cases it's far better to use a biased estimator than an unbiased estimator.


As Michael Hardy said, Rao–Blackwell only guarantees to improve (or at least not hurt) your original unbiased estimator. That is, you start with some unbiased estimator $T(\underline x)$ and improve it by taking its expected value conditioned on a sufficient statistic, $T'(\underline x) := \mathbb{E}[T(\underline x) \mid S(\underline x)]$. "Improve" means that its variance will be less than or equal to the variance of the original estimator.

But something funny happens if you add completeness of the statistic $S(\underline x)$: you get uniqueness. That is, there is only one unbiased estimator that is a function of $S$. So if you start with any unbiased estimator and apply the Rao–Blackwell step with a complete sufficient statistic, the result is that unique unbiased estimator, and its variance is no larger than that of any other unbiased estimator. In other words, it is the UMVUE (uniformly minimum-variance unbiased estimator).
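Here is a minimal simulation sketch of that improvement (mine, not the answer's; the Bernoulli setup and all names are illustrative choices). For an i.i.d. Bernoulli sample, starting from the crude unbiased estimator $T(\underline x) = x_1$ and conditioning on $S(\underline x) = \sum_i x_i$ gives $T'(\underline x) = \bar x$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 20, 100_000            # true parameter, sample size, Monte Carlo replications

# Draw `reps` independent Bernoulli(p) samples of size n.
x = rng.binomial(1, p, size=(reps, n))

# Crude unbiased estimator T(x) = x_1: uses only the first observation.
crude = x[:, 0]

# Rao-Blackwellized estimator: E[x_1 | sum(x)] = mean(x) for an i.i.d. Bernoulli sample.
rao_blackwellized = x.mean(axis=1)

print("mean of crude estimator:           ", crude.mean())             # ~ p (unbiased)
print("mean of Rao-Blackwellized version: ", rao_blackwellized.mean()) # ~ p (still unbiased)
print("variance of crude estimator:       ", crude.var())              # ~ p(1-p)
print("variance of Rao-Blackwellized one: ", rao_blackwellized.var())  # ~ p(1-p)/n
```

Since $\sum_i x_i$ is also complete (see the next paragraph), $\bar x$ is not just better than $x_1$: by Lehmann–Scheffé it is the UMVUE of $p$.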

Completeness is a restriction on a statistic $S$: whenever a function $g$ of it satisfies $\mathbb{E}[g(S(\underline x))] = 0$ for every value of the parameter, then $g(S(\underline x)) = 0$ with probability 1. Take a Bernoulli sample and the statistic $x_1 - x_2$. Then $\mathbb{E}(x_1 - x_2) = 0$ for every $p$, but $x_1 - x_2 \neq 0$ whenever the first and second trials give different results; it is zero only when they agree. So $x_1 - x_2$ is not a complete statistic (take $g$ to be the identity function). $h(\underline x) = \sum x_i$, however, is a complete statistic: the only function of it whose expected value is zero for every $p$ is zero itself with probability 1, as sketched below.
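A quick sketch of why that holds (the standard argument, filled in here): with $T = \sum_{i=1}^n x_i \sim \operatorname{Binomial}(n, p)$,

$$\mathbb{E}_p[g(T)] = \sum_{t=0}^{n} g(t) \binom{n}{t} p^{t} (1-p)^{n-t} = (1-p)^{n} \sum_{t=0}^{n} g(t) \binom{n}{t} \left(\frac{p}{1-p}\right)^{t}.$$

If this is $0$ for every $p \in (0,1)$, then the polynomial in $p/(1-p)$ on the right has all coefficients $g(t)\binom{n}{t}$ equal to zero, so $g(t) = 0$ for every $t \in \{0, \dots, n\}$. In other words, the only unbiased estimator of zero that is a function of $T$ is zero itself, which is exactly completeness.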