Example of Sufficient and Insufficient Statistic?

$\def\E{\mathrm{E}}$Consider samples $X = (X_1,X_2)$ from a normally distributed population $N(\mu,1)$ with unknown mean.

Then the statistic $T(X)=X_1$ is an unbiased estimator of the mean, since $\E(X_1)=\mu$. However, it is not a sufficient statistic: there is additional information in the sample that we could use to estimate the mean.

How can we tell that $T$ is insufficient for $\mu$? By going back to the definition. A statistic $T$ is sufficient for a parameter iff, given the value of the statistic, the conditional distribution of the sample does not depend on the parameter, i.e. if

(Since the $X_i$ are continuous, read the $P$'s below as conditional densities.)

$$P(X=x|T=t,\mu)=P(X=x|T=t)$$

But we can compute this:

$$P(X=(x_1,x_2) | X_1=t,\mu) = \begin{cases} 0 & \mbox{if }t\neq x_1 \\ \tfrac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x_2-\mu)^2} & \mbox{if }t=x_1 \end{cases}$$

which is certainly not independent of $\mu$.
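To make the dependence concrete, here is a quick numerical check (a sketch; the evaluation point $x_2=0$ and the means $0$ and $1$ are arbitrary choices): the conditional density above takes different values at the same sample point for different $\mu$, so knowing $T=X_1$ does not pin down the distribution of the data.

```python
import math

def cond_density_given_x1(x2, mu):
    """Conditional density of X=(x1,x2) given X1=x1 (with t=x1):
    this is just the N(mu,1) density evaluated at x2."""
    return math.exp(-0.5 * (x2 - mu) ** 2) / math.sqrt(2 * math.pi)

# Same point x2=0, two different means: the values differ,
# so T(X) = X1 cannot be sufficient for mu.
print(cond_density_given_x1(0.0, 0.0))  # ~0.3989
print(cond_density_given_x1(0.0, 1.0))  # ~0.2420
```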

On the other hand, consider $T'(X) = X_1+X_2$. Then we have

$$P(X=(x_1,x_2) \mid X_1+X_2=t, \mu) = \frac{\frac{1}{2\pi}e^{-\frac{1}{2}(x_1-\mu)^2-\frac{1}{2}(x_2-\mu)^2}}{\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-\frac{1}{2}(s-\mu)^2-\frac{1}{2}(t-s-\mu)^2}\,ds}$$

whenever $x_1+x_2=t$ (and $0$ otherwise): the joint density of the sample divided by the density of the sum. Completing the square in the exponent (substitute $u=s-t/2$) shows the integral in the denominator equals $\sqrt{\pi}\,e^{-(t/2-\mu)^2}$, and every term involving $\mu$ cancels between numerator and denominator, leaving

$$P(X=(x_1,x_2) \mid X_1+X_2=t) = \frac{1}{\sqrt{\pi}}\,e^{-\frac{1}{2}(x_1^2+x_2^2)+\frac{t^2}{4}},$$

which is independent of $\mu$, and hence $T'$ is a sufficient statistic for the mean $\mu$.
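As a sanity check, the conditional density of the sample given the sum (joint density of the sample divided by the density of the sum, the latter evaluated by a crude numerical integral) can be computed for several values of $\mu$. This is a sketch assuming NumPy is available; the sample point $(0.3, 1.1)$ and the grid width are arbitrary choices.

```python
import numpy as np

def cond_density_given_sum(x1, x2, mu):
    """Conditional density of X=(x1,x2) given X1+X2 = x1+x2, for iid N(mu,1)
    samples: joint density divided by the density of the sum at t = x1+x2."""
    t = x1 + x2
    joint = np.exp(-0.5 * (x1 - mu) ** 2 - 0.5 * (x2 - mu) ** 2) / (2 * np.pi)
    # Density of X1+X2 at t via the convolution integral; the integrand
    # peaks at s = t/2, so a wide grid centered there captures all the mass.
    s = np.linspace(t / 2 - 20.0, t / 2 + 20.0, 400001)
    integrand = np.exp(-0.5 * (s - mu) ** 2 - 0.5 * (t - s - mu) ** 2) / (2 * np.pi)
    marginal = integrand.sum() * (s[1] - s[0])
    return joint / marginal

# Same sample point, very different means: the conditional densities agree,
# which is exactly what sufficiency of X1 + X2 predicts.
vals = [cond_density_given_sum(0.3, 1.1, mu) for mu in (-2.0, 0.0, 5.0)]
print(vals)
```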